Artificial Intelligence and Machine Learning

  • Computational Linguistics
    Computational Linguistics (also called Natural Language Processing - NLP) is a field at the crossroads of Computer Science and Linguistics. The ultimate goal in our field is to understand the nature of human languages through computational models. Modern NLP relies heavily on machine learning and statistical models to explain processes observed in natural languages.
    Applications include machine translation, natural language inference (extracting information from text), text generation, automatic summarization, and question answering. Recent developments in neural networks have had a significant impact on the field and have become key enabling techniques in NLP. Our research lab specializes in processing the Hebrew language. We also work actively on automatic summarization, syntactic parsing, and semantic text analysis. We recently started investigating applications of vision-language grounding, looking at ways to describe images in natural language and to answer questions about the content of images.
  • Computational and Computer Vision
    The goal of computer vision is visual inference; i.e., extracting information, or drawing conclusions, from visual data such as pictures or videos. More loosely speaking, it is about teaching computers to see. The field combines elements from computer science, mathematics, statistics, engineering, physics, cognitive sciences, and more. Real-world applications are omnipresent, as is evident from the high demand for computer-vision researchers in both academia and industry. Examples include automatic face recognition and an autonomous car using visual sensors to avoid collisions.
  • Data Mining
  • Distributed Constraints
    Distributed Constraint Satisfaction Problems (DisCSPs) are composed of a set of variables distributed among agents. The variables are connected by constraints, which define the constraint network among the agents. As a result, the search algorithm for solving these problems is a distributed algorithm, run by agents that communicate by sending and receiving messages. In general, messages contain information about assignments of values to variables and refutations of assignments by agents that have no compatible assignment to their own variables. A natural extension of distributed constraint satisfaction problems is Distributed Constraint Optimization Problems (DCOPs). DCOPs have valued constraints, so that any compound assignment of constrained agents is associated with a finite cost (or gain). The goal of a search algorithm for DCOPs is to find the optimal solution, most commonly under the utilitarian objective function, which is the sum of the costs of all agents.
    Typical example problems are a timetable for courses in a university, where several departments cooperate on a single curriculum, or for employees in a large workplace such as a hospital or an ER, where several wards cooperate in satisfying important constraints of the medical staff. More interesting recent examples incorporate non-cooperative agents who search for a good solution to a multi-agent game.
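    To make the setup concrete, here is a minimal sketch of a toy constraint optimization problem with the utilitarian objective (all variable names, domains, and cost tables below are hypothetical, and the search is a plain centralized brute force; real DCOP algorithms such as ADOPT or DPOP distribute this search among the agents via message passing):

```python
from itertools import product

# Hypothetical toy problem: each agent controls one variable with
# domain {0, 1, 2}; each binary constraint assigns a cost to every
# pair of values of the two constrained variables.
domains = {"A": [0, 1, 2], "B": [0, 1, 2], "C": [0, 1, 2]}

constraints = {
    ("A", "B"): lambda a, b: 0 if a != b else 5,  # prefer different values
    ("B", "C"): lambda b, c: abs(b - c),          # prefer close values
}

def utilitarian_cost(assignment):
    """The utilitarian objective: the sum of all constraint costs."""
    return sum(f(assignment[x], assignment[y])
               for (x, y), f in constraints.items())

def solve():
    """Enumerate all compound assignments and return a cheapest one."""
    names = list(domains)
    best = min(product(*(domains[n] for n in names)),
               key=lambda vals: utilitarian_cost(dict(zip(names, vals))))
    return dict(zip(names, best))

solution = solve()
print(solution, utilitarian_cost(solution))
```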
  • Evolutionary Computation
    In Evolutionary Computation (EC), core concepts from evolutionary biology—inheritance, random variation, and selection—are harnessed in algorithms that are applied to complex computational problems. The field of EC, whose origins can be traced back to the 1950s and 60s, has come into its own over the past decade. EC techniques have been shown to solve numerous difficult problems from widely diverse domains, in particular producing human-competitive machine intelligence.
  • Machine Learning
    Machine Learning is the science of automatic computerized learning from examples. The field of machine learning aims to discover methods and algorithms that allow converting data to knowledge. Data can be of any form: images, sounds, health records, stock market prices, traffic routes, user behavior, and so on. Machine learning algorithms use this data to infer rules that allow making predictions, identifying patterns, and organizing information. For instance, a machine learning algorithm can be used to infer from data about user behavior which ad is best to present to the next user, or to predict tomorrow's weather based on today's weather along with historical weather data. Machine learning is used today by all the major hi-tech companies and many start-ups. It is also used for scientific purposes in a wide range of applications, from identifying sub-atomic particles in a physics experiment to automatic diagnosis of diseases based on MRI images.
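    As a minimal illustration of learning from examples, here is a 1-nearest-neighbor classifier: it predicts the label of whichever training example is closest to the query. The data and labels below are hypothetical toy values:

```python
import math

# Hypothetical labeled training examples: (feature vector, label).
train = [((1.0, 1.0), "spam"), ((1.2, 0.8), "spam"),
         ((5.0, 5.0), "ham"),  ((4.8, 5.2), "ham")]

def predict(x):
    """Infer a label for x from the closest labeled example."""
    return min(train, key=lambda ex: math.dist(ex[0], x))[1]

print(predict((1.1, 0.9)))  # falls near the "spam" cluster
print(predict((5.1, 4.9)))  # falls near the "ham" cluster
```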
  • Planning and Reasoning
    Planning is a core field of artificial intelligence (AI), wherein a system uses its knowledge about a domain to reason about the state of the world as a result of its actions, and chooses actions so as to achieve certain goals. As many domains in the real world are inherently uncertain, this often requires the capability for reasoning and decision-making under uncertainty. Hence this field of AI involves techniques for modeling domains and potential actions, including graphical uncertainty models, search, automated generation of heuristic functions, and numerous other core AI techniques; it has wide-ranging applications, from intelligent software (including "web") agents all the way to physical robots.
  • Robotics
    In robotics we are interested in making physical agents, robots, that can perform tasks in the real world. Our group is particularly interested in autonomous robots, robots that can make decisions and do not run pre-programmed scripts, but rather select their actions based on their current situation and goal. We seek methods for making it easier to program such robots and algorithms that support autonomous behavior.

Computer Systems, Communication, and Software Engineering

  • Communication Networks and Algorithms
    The area of communication networks and algorithms is broad and highly active. It has motivated several research sub-areas, some of which are studied at our Department.

    Two of its theoretical sub-areas are studied by methods of distributed computing and computational geometry. They model communication networks either via abstract graphs or via a collection of points in the plane (e.g., routers, antennas). In the latter case, each point has its own transmission/reception radius. Once modeled in one of these ways, network problems are cast as either graph-algorithmic or geometric ones. These problems are then analyzed either in a centralized setting, i.e., when there is a single processor that knows the entire input and is supposed to produce the output in its entirety, or in a distributed one, when a processor with only local information resides in every vertex/point. Each of these settings gives rise to a rich and vibrant research area.

    Another, mostly applied and wide sub-area of networking grew out of Internet studies. Due to the classic Internet implementation, most of it is based on the distributed model. A novel direction is based on the prospective concept of Software-Defined Networks (SDNs), in which each SDN is mostly controlled in a centralized way and the network elements are directly programmable. SDN is quickly being adopted by enterprises and network vendors. For example, Google, Symantec, and others use either SDN or their own custom variants, while Cisco, VMware, and other vendors provide SDN-enabled network elements. Such centralized and dynamic networking gives rise to new and interesting problems. One of them is seamless route replacement. It is studied at our Department based on the Make&Activate-Before-Break approach, which supports seamless route updates.
  • Databases and storage systems
  • Distributed Systems and Computations
    In the age of cloud computing, distributed systems have become part of the daily lives of all of us, be it in the governmental, business, or social aspects of our lives. As algorithms and software become more complex, the computational tasks grow, and so does our demand for computational power and efficiency. At the same time, computer manufacturers have reached the limit of a single processor's capabilities. As a result, parallel and distributed systems have become our only way to increase our computing capabilities. Perhaps surprisingly, most distributed systems, such as cloud services, Internet servers, and multi-core computations, share the same concerns: careful synchronization, load balancing, and efficient usage of hardware resources.
  • Programming languages
    The field of programming languages is concerned with the study, design, implementation, and use of programming languages.

    The kinds of questions asked in this field include: What makes programming languages powerful, expressive, and convenient to use? What language features can be implemented easily and efficiently? What language features can be reduced to other, more basic features? How best to implement programming languages in compilers and interpreters? What features of a language enable the compiler to generate efficient code? Given a program written in some language, what can we know about the program before/without running it? What mathematical objects correspond to computer programs, and what can we prove about their properties? How can correct programs be synthesized from formal proofs about their properties?

    Research in programming languages is very diverse in character: There are theoretical questions, there are applied questions, there are questions that touch upon the logical foundations of computation, and there are questions that concern expressibility, methodologies, frameworks, and system building.
  • Self-Stabilization
    Self-stabilization, an important concept to theoreticians and practitioners in computing, distributed computing, and communication networks, refers to a system's ability to recover automatically from unexpected faults. The research focuses on algorithms that, starting from any arbitrary state, allow the system to recover from the faults and bring it back to a correct state. An additional goal of this research area is to design self-stabilizing systems.
  • Software engineering and verification
    Would you ride in an autonomous vehicle you coded the software for? The research questions in the field of software engineering and verification are driven by the need to increase the reliability of programs and computerized systems and the assurance of their correctness. For instance, verification is concerned with providing automatic techniques for proving that systems adhere to their formal specification. Synthesis is concerned with providing aids to automatically generate parts of systems/programs that are correct by construction. In spite of the high complexity of the algorithms for solving these questions, verification tools are being used on a daily basis in companies such as Intel and Boeing, which cannot afford the risk of bugs. This field of study offers the opportunity to work on challenging open theoretical questions as well as to impact the ways systems are being designed in industry.

Cyber Security

  • Anomaly detection
    Anomaly detection differs from ordinary supervised classification in that typically, during the training phase, the learning algorithm only observes “normal” examples -- and yet is expected to detect “anomalous” ones if they appear during the testing phase. This presents, first and foremost, a philosophical problem, typical of the unsupervised setting: what's to stop a learner from trivially labeling every single instance as “normal”? Nevertheless, this problem setting is of considerable importance in real-life problems, and I am regularly faced with its various manifestations in the course of consulting for companies such as Deutsche Telekom, EMC, PayPal, and IBM.
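    In the simplest one-dimensional case, the setting above can be sketched as follows: fit a model to the "normal" training samples only, and flag test points that deviate too far from it. The samples and the three-sigma threshold below are hypothetical choices for illustration:

```python
import statistics

# Hypothetical training data: only "normal" readings are observed.
normal_samples = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]

mu = statistics.mean(normal_samples)
sigma = statistics.stdev(normal_samples)

def is_anomaly(x, k=3.0):
    """Flag x as anomalous if it lies more than k sigmas from the mean."""
    return abs(x - mu) > k * sigma

print(is_anomaly(10.05))  # a typical value -> False
print(is_anomaly(25.0))   # far from anything seen in training -> True
```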
  • Biometrics
    Now, more than ever, cyber security is also about the verification and identification of individuals for physical or cyber access control, and in this quest, biometrics has become a primary tool. As a scientific and technological field dedicated to measuring human characteristics, biometrics must always juggle robustness, reliability, portability, and affordability. Research directions in this field in the department lie at the intersection of the computational sciences, neuroscience, and computer vision, aiming to optimize all these aspects of biometrics simultaneously towards foolproof, portable, and affordable methods for individual verification and identification in cyber systems.
  • Computer security
    Since the mid-20th century, computing power has grown exponentially. We all feel the advantages in our daily lives, but the drawback is that we are becoming much more dependent on computers. In the early days, functionality was considered more important than security, and therefore many systems are vulnerable to cyber attacks. Computer security research addresses this issue at all levels: hardware (e.g., Spectre and Rowhammer), software (e.g., secure development), privacy (e.g., homomorphic encryption and other solutions), and more.
  • Cryptocurrencies
    A cryptocurrency is a form of digital money that does not require a central authority (such as a bank). Modern cryptocurrencies (most notably Bitcoin) are based on the pioneering work of Satoshi Nakamoto. Nakamoto designed protocols that allow participants to reach consensus on the state of the blockchain, a public decentralized ledger that records all the transactions in the system.

    Since Bitcoin was launched in 2009 by Nakamoto, cryptocurrencies have accumulated a market capitalization of several hundred billion dollars and attracted massive attention from governments, industry, and academia. Nevertheless, cryptocurrencies are far from being a common and standard means of payment, and there are many obstacles that must be overcome to reach this goal.

    Research directions in this domain include enhancing the scalability and efficiency of cryptocurrencies as well as improving their security against various types of attacks.
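    The core ledger structure described above can be sketched in a few lines: each block commits to its predecessor via a hash, so tampering with any recorded transaction invalidates every later block. This is a simplified illustration only (consensus, mining, and signatures are omitted, and the transactions are hypothetical):

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    """Append a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "txs": transactions})

def is_valid(chain):
    """Check that every block points to the hash of the block before it."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, ["alice pays bob 5"])
add_block(chain, ["bob pays carol 2"])
print(is_valid(chain))                     # the untouched chain verifies
chain[0]["txs"] = ["alice pays bob 500"]   # tamper with recorded history
print(is_valid(chain))                     # the later block no longer matches
```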
  • Cryptography and Privacy
    Cryptography and privacy are central areas of research in cyber security. Cryptography aims to protect parties from attackers that attempt to eavesdrop on their communication or modify it. Cutting-edge research in cryptography is also devoted to more advanced features such as secure multiparty computation, which allows parties to jointly compute a function of their inputs while making sure their inputs remain private.

    With the proliferation of information technologies and big data analytics, preserving privacy is an increasingly challenging task. One of the main goals of research in data privacy is to protect an individual's personally identifiable information in large databases that contain sensitive information (such as medical records), while preserving the utility of this data (for purposes such as medical research). The main formal mathematical framework developed for this purpose is differential privacy, which is a very active area of research and is also in the initial stages of deployment in practice.
  • Data Security
    Data security is part of the computer security or cyber security area. It deals with protecting data in databases and in the cloud. We develop cryptography-based techniques to protect such data while enforcing different access control policies. The field also includes the topics of protecting the cloud from malware penetration and of security and privacy in social networks.
  • Image Forensics
    Visual content, images or videos, dominates our world not only because it is rich (after all, "a picture is worth a thousand words") but because we often tend to believe that "seeing is believing". This approach has also been the basis of statutory procedures, allowing images to serve as admissible evidence, as long as they are original. But with sophisticated image-editing tools such as Photoshop and computer vision techniques such as image inpainting and augmented reality, seeing is no longer believing, and visual content can definitely qualify as "fake news". Image forensics studies how to tackle such frauds and, in particular, how one can authenticate digital images and other visual content.
  • Quantum Cryptography
    Cryptography is the science of dealing with adversaries in computational settings. Many times, it turns lemons (intractable computational problems) into lemonade (useful cryptographic protocols, such as encryption schemes). Quantum computing dramatically changes the landscape of cryptography for two distinct reasons: some cryptographic protocols are no longer secure, since the underlying intractable problems become tractable for quantum computers; and some tasks that cannot be achieved classically can be achieved using quantum computers due to quantum effects, such as unconditionally secure encryption schemes (Quantum Key Distribution) and unforgeable quantum money.
  • Social Network Analysis
    Complex networks in general, and social and technological networks in particular, have become the focus of intense research, mainly due to the widespread availability of data resulting from on-line social networks (OSNs) and other Internet applications. These networks are often characterized by a hierarchical structure, a heavy-tailed degree distribution, and the small-world property, meaning that the mean distance between pairs of nodes is small relative to the network's size.

    Complex network analysis tools, such as community detection and link analysis algorithms, are used by a wide range of applications. In our cyber security research, we develop and apply tools for complex networks analysis in order to detect malicious entities, such as files, machines, accounts or Internet domains, based on the patterns of their interactions.
  • Trust and Reputation
    The issue of trust is part of the general cyber security area. It involves technical issues like trusting the authentication process, or trusting a third party to perform secure computations. It also includes social and privacy issues like evaluating the reputation of user profiles or posts in a social network. Recently we conducted research on using reputation models for detecting malicious internet domains.

Interdisciplinary Research

  • Bioinformatics
  • Computational Science and Engineering
    Computational science and engineering (also called scientific computing) is a multidisciplinary field that uses advanced computing capabilities to understand and solve complex problems arising in various applications. Computational science frameworks include the development of numerical algorithms to solve various problems, methods to extract knowledge from large scientific data, and methods to model and simulate natural phenomena. The three main mathematical tools that dominate the field are linear algebra, mathematical optimization, and partial differential equations. The research in this field is often coupled with high-performance computing.
  • Computational and Biological Vision
    Vision is arguably the most important of all senses, without which our life would be qualitatively different. For a wide range of applications, from "seeing robots" to the restoration of sight for the blind, it is indispensable to understand vision as an information processing mechanism. Such an inquiry is the primary task of Computational Vision, a scientific discipline that explores and studies vision from an interdisciplinary computational point of view, both for the automatic analysis and interpretation of visual signals (i.e., images and videos) and for explaining and understanding biological, and in particular human, vision. Unlike Computer Vision mentioned above (a field of inquiry that is largely detached from the other vision sciences), exploring vision as a whole must involve interdisciplinary research where computational inquiry goes hand in hand with behavioral, cognitive, and neuroscience explorations. We therefore complement the development of algorithms for the automatic analysis of images and videos with computational modeling of visual functions, behavioral and psychophysical experimentation with humans and animals, and the computational exploration of physiological and anatomical aspects of visual cortical regions.
  • Computer Music
  • Network Science
    Network Science deals with the analysis of complex networks such as social networks and technological networks. These networks are often characterized by a hierarchical structure, a heavy-tailed degree distribution, and the small-world property, meaning that the mean distance between pairs of nodes is small relative to the network's size. The research in this area is quite diverse and includes both the analysis of complex networks from data and the study of theoretical models aiming to understand how these complex networks gain these properties.

Theory of Computer Science

  • Algorithmic Game Theory
    Algorithmic game theory is a relatively new field connecting computer science and economics. A main driving force behind this field was the Internet. Suddenly, there was a need to efficiently handle large-scale, complex economic markets (ad auctions and spectrum auctions are well-known examples). Algorithmic game theory addresses this need by combining the vast experience of economics in designing markets with the expertise of computer scientists in efficiently handling large computerized systems. The scope of the research has broadened to include diverse topics such as algorithmic mechanism design, price of anarchy, social choice, social networks, and equilibrium computation. Tools from algorithmic game theory have been applied to the modeling and analysis of various social phenomena (opinion formation is one example).
  • Algorithms
    Algorithms is a core area of computer science, concerned with the design, analysis, and implementation of problem-solving methods. The foundations of any computer system, whether it is a home desktop, a datacenter, the Internet, or a deep neural network, rely on sound algorithmic ideas. The quantity of digital data continues to grow at an exponential rate, which accentuates the need for faster and more accurate algorithms. The goal of our research group is to provide scalable solutions to a wide array of problems in various settings. We also study which algorithmic objectives are impossible, by proving lower bounds.
  • Coding Theory
    Coding theory deals with the design and the study of the properties of error-correcting codes. Besides error correction, error-correcting codes have many different applications, such as data compression, cryptography, and networking lower bounds. Codes are studied by various scientific disciplines, such as information theory, electrical engineering, mathematics, linguistics, and computer science. Our research is focused on designing codes for interactive communication and on locally decodable codes.
  • Complexity
  • Cryptography
    Modern cryptography provides algorithms and protocols for protecting honest parties from distrusted or malicious parties that attempt to eavesdrop on communication or modify it. It plays an increasingly dominant role in modern life due to the deployment of new technologies that expose users to new potential threats.

    Cryptography is a very broad field of research that combines both theory and practice. Our research spans diverse sub-areas of cryptography such as secure multiparty computation (which aims to develop protocols that allow parties to jointly compute a function of their inputs while making sure the inputs remain private) and cryptanalysis (whose goal is to devise methods to evaluate the security of ciphers, thus ensuring that only the most secure ones are deployed).
  • Data Privacy
  • Distributed Algorithms
    In distributed algorithms we study problems modeled by a graph, whose vertices host processors. These processors communicate with one another along communication links, modeled by edges of the graph. Initially, each vertex knows only its own part of the graph, and ultimately they need to solve some common task, such as to compute a minimum spanning tree, or to color the graph with a few colors. The objective is to perform this efficiently, both in terms of running time and in terms of total communication. This is a very active area. It has rich mathematical theory and numerous open problems, and practical applications to the world of communication networks.
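    The flavor of such algorithms can be sketched with a toy synchronous simulation of graph coloring (a simplified illustration, not a specific published algorithm): in each round, every still-uncolored vertex that has the highest ID among its uncolored neighbors picks the smallest color not used by its already-colored neighbors, using only local information. The graph below is hypothetical:

```python
# Adjacency lists of a small hypothetical graph; vertex IDs are the keys.
graph = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2], 4: [2]}
color = {}

while len(color) < len(graph):
    # All deciders are determined from the state at the start of the round;
    # two deciders are never adjacent, so they may act simultaneously.
    deciders = [v for v in graph if v not in color
                and all(u in color or u < v for u in graph[v])]
    for v in deciders:
        used = {color[u] for u in graph[v] if u in color}
        # degree + 1 candidate colors always suffice
        color[v] = min(c for c in range(len(graph[v]) + 1) if c not in used)

print(color)  # a proper coloring: adjacent vertices differ
```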
  • Logic and Semantics
  • Low Distortion Embeddings
    The main objective in the field of metric embedding is to understand how well arbitrary “complex” metric spaces can be represented by “simpler” spaces. Typical simple spaces include a low-dimensional space, Euclidean space, a tree, a sparse graph, etc. Low-distortion embeddings provide a powerful and versatile toolkit for solving algorithmic problems, applicable in a variety of settings, such as approximation and online algorithms, computational biology, machine learning, computer vision, and many others.
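    A classical entry point is the Fréchet embedding, which maps each point of a finite metric space to the vector of its distances to all points; under the l-infinity norm this embedding is isometric (distortion exactly 1), since the triangle inequality gives |d(x,p) - d(y,p)| <= d(x,y), with equality at p = x. The three-point metric below is a hypothetical example:

```python
# A hypothetical 3-point metric space (it satisfies the triangle inequality).
points = ["a", "b", "c"]
d = {("a", "b"): 2.0, ("a", "c"): 3.0, ("b", "c"): 4.0}

def dist(x, y):
    return 0.0 if x == y else d.get((x, y), d.get((y, x)))

def embed(x):
    """Frechet embedding: f(x) = (d(x, p1), ..., d(x, pn))."""
    return [dist(x, p) for p in points]

def linf(u, v):
    """l-infinity distance between two embedded vectors."""
    return max(abs(a - b) for a, b in zip(u, v))

# Verify the embedding preserves all pairwise distances exactly.
for x in points:
    for y in points:
        assert abs(linf(embed(x), embed(y)) - dist(x, y)) < 1e-12
print("isometric")
```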
  • Quantum Computing
    Quantum computers are computing devices that use quantum mechanics to their advantage. Quantum computers (if they are to be built) could solve certain computational problems much faster than classical computers. Fundamentally, the questions we are trying to answer are: which computational tasks are quantum computers good for, and which types of tasks are they not good for?
  • Theorem proving and type theory

Vision, Graphics, and Geometry

  • Augmented Reality
  • Computational Geometry
    Computational geometry is concerned with algorithms and data structures for geometric objects. The primary goal of research in computational geometry has been to develop efficient algorithms and data structures for solving problems stated in terms of basic geometrical objects, such as points, line segments, polygons, polyhedra, arrangements (of lines, planes, geometric shapes), etc. Over the years, computational geometry has evolved and taken diverse paths to many applications, such as robotics (collision-free motion planning), computer graphics and computer vision (ray tracing, shape obscuring), facility location (locating antennae on terrains), and more. In parallel with theoretical algorithms, new algorithmic paradigms are being used, such as geometric optimization, random sampling, and randomized geometric algorithms.

    Our strong group in computational geometry is involved in research in, e.g., combinatorial issues of computational geometry, Euclidean Steiner trees, approximation algorithms, devising efficient geometric data structures and algorithms for real world problems, and more.
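    As a small taste of the field, here is a staple routine operating on the most basic geometric objects, points: Andrew's monotone-chain convex hull, which runs in O(n log n) time after sorting. The input points are a hypothetical example:

```python
def cross(o, a, b):
    """Cross product of vectors OA and OB; positive means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Return hull vertices in counter-clockwise order, lowest-leftmost first."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(points):
        h = []
        for p in points:
            # pop while the last two kept points and p do not make a left turn
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]  # drop the duplicated endpoints

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
# -> [(0, 0), (2, 0), (2, 2), (0, 2)]
```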
  • Computational and Computer Vision
    The goal of computer vision is visual inference; i.e., extracting information, or drawing conclusions, from visual data such as pictures or videos. More loosely speaking, it is about teaching computers to see. The field combines elements from computer science, mathematics, statistics, engineering, physics, cognitive sciences, and more. Real-world applications are omnipresent, as is evident from the high demand for computer-vision researchers in both academia and industry. Examples include automatic face recognition and an autonomous car using visual sensors to avoid collisions.
  • Computer Graphics
    Computer graphics studies the manipulation of visual and geometric information using computational techniques. It focuses on the mathematical and computational foundations of image generation and processing while also considering aesthetic issues. Research in computer graphics focuses on geometry acquisition and processing, on interactive techniques, and on related areas such as computer vision, machine learning and AR/VR.
  • Imaging Sciences
    Imaging sciences is a broad field that involves processing, analyzing, reconstructing, compressing, and visualizing digital images and videos. A digital image is a numeric representation of a two-dimensional image, using a grid of values called pixels that represent the color of the image at each specific point. Many imaging applications start from given (corrupted) data related to an unknown image, and the goal is to reconstruct the unknown image using mathematical algorithms and software.
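    A minimal sketch of one such processing step, operating directly on the pixel grid just described, is a 3x3 mean filter that smooths a noisy pixel (the image values are hypothetical; border pixels average only their in-bounds neighbors):

```python
# A tiny hypothetical grayscale image stored as a grid of pixel values.
image = [
    [10, 10, 10, 10],
    [10, 90, 10, 10],  # the 90 is a noisy outlier pixel
    [10, 10, 10, 10],
]

def mean_filter(img):
    """Replace each pixel by the mean of its 3x3 in-bounds neighborhood."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            nbrs = [img[x][y]
                    for x in range(max(i - 1, 0), min(i + 2, rows))
                    for y in range(max(j - 1, 0), min(j + 2, cols))]
            out[i][j] = sum(nbrs) / len(nbrs)
    return out

smoothed = mean_filter(image)
print(smoothed[1][1])  # the outlier is pulled toward its neighbors
```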