As AI advances, it continues to permeate every layer of society.
The most noticeable example has been the release of OpenAI’s ChatGPT last year, but AI has been deployed for years across sectors and industries ranging from defense and security to education and retail, for uses ranging from detecting anomalies to offering product recommendations.
With generative AI top of mind, but not the only hot item in the space, data scientists, data engineers, entrepreneurs, policy makers, students, and academics from around the world will meet in Tel Aviv this week for AI Week to exchange knowledge and ideas on the technologies that are rapidly changing our world.
“It’s not enough to develop the technology by itself. You have to develop the whole ecosystem,” said conference chair Maj. Gen. (Ret.) Prof. Isaac Ben Israel.
Tasked with preparing Israeli society for AI, Ben Israel started AI Week, a multi-day conference with satellite events, four years ago to bring together that whole ecosystem.
“It’s not only technology by itself, or computer science by itself, or machine learning, etc., but also decision makers who contribute to our lives in healthcare, transportation, agriculture, even food, almost anything in our life. So it’s an interdisciplinary conference,” continued Ben Israel, who is director of Tel Aviv University’s Blavatnik Interdisciplinary Cyber Research Center and co-head of Israel’s AI Initiative.
This year, thousands of participants from around the world will attend the event in Tel Aviv, both in person and virtually, he added.
Topics that will be covered include Computer Vision, AI in Defense, AI in Health, NLP, ML Theory, Recommender Systems, and more.
Ahead of the conference, CDInsights spoke with a few of the speakers.
Here are some highlights:
Session: Scalable Trustworthy AI – Beyond “what,” towards “how”
Seong Joon Oh, leader of the Scalable Trustworthy AI group at the University of Tübingen, will give an overview of his search for the ingredients that make models more explainable and more robust to distribution shifts. He will then discuss promising future sources of such ingredients.
“People should leave my session thinking that they should change the way they collect training data and that we should acknowledge that our models do not have the right characteristics like trustworthiness or a lot of alignment with human intention. AI initiatives are trying to transfer knowledge from the human domain to the computational domain. The best method we have now is through annotation, but we should be trying to collect more information from humans, from the annotators, for example.”
Session: AI in Health Care: Promise Meets Reality
Margaret Brandeau, a professor in Stanford’s School of Engineering and a professor of medicine, will discuss what is needed for AI to be successfully integrated into healthcare systems and the next steps that can be taken to advance implementation.
“Our ultimate goal for our AI projects is to have some automated decision making in our hospital. I’m going to talk about the five steps we’ve identified that every successful AI project goes through. We define the success of a project as sustained measured value.
“First, you have to have stakeholder buy-in; then you have to have solved the problem; then you implement your algorithm or expert system; you have to sustain the use of the system by the people for whom it’s been built; and finally, you have to measure the value.
“I’ll present four projects we did at Lucile Packard Children’s Hospital Stanford: three made it through part of the chain, and one has made it all the way, where its use is sustained and the value it creates has been measured.”
Session: Power to the People: A New Framework for Content Moderation and Governance for Internet Platforms
Anjali Joshi, a board director at Lattice Semiconductor and Alteryx and an executive in residence at INSEAD, will discuss an alternative, decentralized framework for AI-enabled content moderation that uses a more tailored algorithm augmented with a contextual layer adapted to the type of content a given community finds acceptable.
“The idea is that in real life, people in different countries have different rules by which they operate. And so to apply one model, across the entire world, is obviously not going to work. And it’s going to flag both false positives as well as false negatives in terms of the content. So, that’s the thesis of the talk, which is to now think about not building such huge single models across everything, but to build smaller contextual models, which can be used to moderate communities.”
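The core idea, smaller contextual models layered on a shared baseline rather than one global model, can be sketched in toy form. The code below is purely illustrative and not from the talk; the community names, rule sets, and tags are invented for demonstration.

```python
# Toy sketch: per-community "contextual" moderation layered on a global baseline.
# All community names and rules here are hypothetical examples.

GLOBAL_RULES = {"spam-link"}  # baseline rules applied everywhere

COMMUNITY_RULES = {
    "medical-forum": {"unverified-health-claim"},  # stricter about health content
    "gaming-forum": set(),  # tolerates slang a one-size-fits-all model might flag
}

def moderate(post_tags: set, community: str) -> bool:
    """Return True if a post (represented as a set of tags) is allowed."""
    # Combine the global baseline with the community's contextual layer.
    rules = GLOBAL_RULES | COMMUNITY_RULES.get(community, set())
    return not (post_tags & rules)  # allowed only if no rule is triggered

# The same post can be acceptable in one community and not in another.
post = {"unverified-health-claim"}
print(moderate(post, "gaming-forum"))   # True: no contextual rule triggered
print(moderate(post, "medical-forum"))  # False: community rule triggered
```

The point of the sketch is the routing: the decision depends on which community’s contextual layer is consulted, so false positives in permissive communities and false negatives in strict ones are both reduced relative to a single global model.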
Related: 6Q4: Responsible AI Institute’s Seth Dobrin on Attesting that Your App Does No Harm
Session: Is your Computing Infrastructure Ready for the Next Wave of AI Research?
Dr. Ronen Dar, CTO and co-founder of Run:ai, will discuss best practices for architecting, managing, and maintaining computing infrastructure for AI research in the years to come.
“In my talk, I focus on the problem of GPU utilization and how costly it is today to develop AI. And it’s a trend; the cost only increases, and increases very fast. I’ll go into the details of why GPU utilization is so low in training farms when people train AI models, and why it’s a problem when teams deploy AI models in production and utilization is low. I’ll get into the problems, and then I’ll briefly share what we do at Run:ai.”
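One common driver of low utilization, jobs that reserve a whole GPU but use only a fraction of it, can be shown with a small simulation. This is a toy sketch, not Run:ai’s system; the job fractions and first-fit packing policy are assumptions for illustration only.

```python
# Toy sketch: whole-GPU reservation vs. fractional GPU sharing.
# Job demands and the first-fit policy are hypothetical examples.

def utilization_whole_gpus(job_fractions, num_gpus):
    """Each job reserves one full GPU but only uses a fraction of it."""
    placed = job_fractions[:num_gpus]  # one job per GPU; the rest wait in queue
    return sum(placed) / num_gpus

def utilization_shared_gpus(job_fractions, num_gpus):
    """Jobs share GPUs up to capacity, placed first-fit."""
    free = [1.0] * num_gpus
    used = 0.0
    for frac in job_fractions:
        for i in range(num_gpus):
            if free[i] >= frac:  # fit the job on the first GPU with room
                free[i] -= frac
                used += frac
                break
    return used / num_gpus

jobs = [0.3, 0.2, 0.5, 0.4, 0.1, 0.6]  # hypothetical per-job GPU demand
print(utilization_whole_gpus(jobs, 4))   # most reserved capacity sits idle
print(utilization_shared_gpus(jobs, 4))  # sharing fits every job, raising utilization
```

With whole-GPU reservation, only four jobs run and the cluster sits at about 35% utilization; with fractional sharing, all six jobs fit and utilization rises above 50%, which is the gap GPU-orchestration tools aim to close.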
Related: AI Workloads Need Purpose-built Infrastructure
Day 3 talks
Day 3 will be virtual and includes five tracks: Ethics, Biomed, Theory, AI Applications, and Recommender Systems.
9:20 am: EU Regulation of AI-Based Personalization
Dr. Alžběta Solarczyk Krausová, Head of the Center for Innovations and Cyberlaw Research (CICeRo), Institute of State and Law, Czech Academy of Sciences
9:35 am: Artificial Intelligence in Human Reproduction: Ethical Aspects of AI in IVF
Dr. Sivan Tamir, Researcher, The International Center for Health, Law and Ethics, University of Haifa; Head of Bioethics & Genetic Policy Unit, KSM Research and Innovation Center
10:15 am: Towards Automated Diagnosis of Disease-Related Risk Factors in 3D Medical Imaging Data
Dr. Oren Avram, Postdoctoral Researcher, Department of Computational Medicine, UCLA
11:00 am: Disrupting Drug Development using Multi-Modal Deep Learning and Patient-on-a-Chip Platform
Shahar Harel, Head of AI, Quris
12:20 pm: Dimensionality Reduction: Theory and Practice
Dr. Ora Fandina, Research Fellow, ML & Algorithms group, CS Department, Aarhus University, Denmark
1:05 pm: Efficient Risk Averse Reinforcement Learning
Ido Greenberg, Ph.D. Candidate, Technion
2:05 pm: Curating Billion Image Datasets for Improving Model Quality
Dr. Amir Alush, CTO, Visual Layer
2:40 pm: Adapting Transformers for Recommender Systems (without any text!)
Tzoof Avny Brosh, Senior Machine Learning & NLP Researcher, Microsoft
3:25 pm: Combating Cold Start on a Large Scale: Evaluation Framework for Cold-Start Techniques in Large-Scale Production Settings
Moran Haham, Algorithms Manager, Outbrain
*By Lisa Damast and Elisabeth Strenger