Information technology is becoming increasingly embedded in our society, making it ever easier to collect data about ourselves and our environment. The science concerned with the ‘discovery’ of information from large volumes of unstructured data is called ‘data science’. The field has high practical relevance, as the generation and application of information is an important economic activity in today’s world.

Artificial intelligence (AI) brings with it the promise of genuine human-to-machine interaction. When machines become intelligent, they can understand requests, connect data points and draw conclusions. Machine learning automates analytical model building: it uses methods from statistics, operations research, physics and neural-network research to find hidden insights in data without being explicitly programmed where to look or what to conclude.
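The idea of learning from data rather than from explicit rules can be sketched in a few lines of plain Python. This toy 1-nearest-neighbour classifier (an illustration, not any particular library's API) never contains a hand-written rule; its decision boundary is implied entirely by the labelled examples it is given.

```python
def nearest_neighbor_predict(train, point):
    """Return the label of the training example closest to `point`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(train, key=lambda example: sq_dist(example[0], point))
    return closest[1]

# Labelled examples: nowhere do we write a rule such as "x > 5 means large";
# the data itself determines the answer.
train = [((1.0, 1.0), "small"), ((1.5, 2.0), "small"),
         ((8.0, 8.0), "large"), ((9.0, 7.5), "large")]

print(nearest_neighbor_predict(train, (1.2, 1.1)))  # prints "small"
print(nearest_neighbor_predict(train, (8.5, 8.0)))  # prints "large"
```

Swapping in new training data changes the model's behaviour with no change to the code, which is the essence of "not being explicitly programmed where to look."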

Global migration, combatting disinformation or creating smart cities – today’s complex issues call for a new generation of policymakers and problem solvers who understand how to combine policy and data to create solutions for government, civil society and business. In a world of changing and interlinked policy measures, data science and AI can provide policymakers with unprecedented insight: from identifying policy priorities by modelling complex systems and scenarios, to evaluating hard-to-measure policy outcomes.

Big data analytics is the application of advanced analytic techniques to very large, diverse data sets that include structured, semi-structured and unstructured data, drawn from different sources and ranging in size from terabytes to zettabytes. Big data analytics helps organizations harness their data and use it to identify new opportunities. That, in turn, leads to smarter business moves, more efficient operations, higher profits and happier customers.

Advanced Analytics is the autonomous or semi-autonomous examination of data or content using sophisticated techniques and tools, typically beyond those of traditional business intelligence (BI), to discover deeper insights, make predictions, or generate recommendations. Advanced analytic techniques include data/text mining, machine learning, pattern matching, forecasting, visualization, semantic analysis, sentiment analysis, network and cluster analysis, multivariate statistics, graph analysis, simulation, complex event processing, and neural networks.
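One technique from this list, cluster analysis, can be sketched as a toy k-means loop: assign each point to its nearest centre, recompute each centre as the mean of its cluster, and repeat. This is a minimal illustration (deterministic initialization from the first k points, made-up 2-D data), not a production implementation.

```python
def kmeans(points, k, iters=10):
    """Toy k-means: deterministic initialization (first k points), then
    alternate nearest-centre assignment and centre recomputation."""
    centers = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Move each centre to the mean of its cluster (keep it if the cluster is empty)
        centers = [tuple(sum(vals) / len(cl) for vals in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Two well-separated groups of 2-D points
points = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),
          (8.0, 8.2), (8.3, 7.9), (7.9, 8.1)]
centers, clusters = kmeans(points, k=2)
# Each recovered cluster contains one of the two groups (3 points each)
```

Even though both initial centres start inside the first group, the update loop pulls one of them across to the second group within a few iterations.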

Databases are arguably still the most widespread technology for storing and managing business-critical digital information. Manufacturing process parameters, sensitive financial transactions, or confidential customer records – all of this valuable corporate data must be protected against compromise of its integrity and confidentiality without affecting its availability for business processes. Database security includes a variety of measures used to secure database management systems from malicious cyber-attacks and illegitimate use.
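One concrete measure against illegitimate use is the parameterized query, which neutralizes SQL injection. The sketch below uses Python's standard-library `sqlite3` module with a made-up `users` table; the same placeholder pattern applies to any DB-API driver.

```python
import sqlite3

# In-memory database with an illustrative schema (hypothetical data)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

malicious = "bob' OR '1'='1"
# With a placeholder, the driver treats the input as data, not SQL,
# so the classic injection string matches no rows.
rows = conn.execute("SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()
print(rows)   # []

# A legitimate lookup through the same placeholder still works
safe = conn.execute("SELECT name FROM users WHERE name = ?", ("bob",)).fetchall()
print(safe)   # [('bob',)]
```

Had the query been built by string concatenation, the injected `OR '1'='1'` clause would have matched every row, leaking confidential records.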

Computational science is a key area related to physical mathematics. The problems of interest in physical mathematics often require computations for their resolution. Conversely, the development of efficient computational algorithms often requires an understanding of the basic properties of the solutions to the equations to be solved numerically. Numerical simulation enables the study of complex systems and natural phenomena that would be too expensive or dangerous, or even impossible, to study by direct experimentation.
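The interplay between a numerical algorithm and the known properties of the underlying equation can be shown with the simplest scheme of all, forward Euler, applied to exponential decay, where the exact solution is available for comparison. This is a minimal sketch, not a recommendation of Euler's method for serious work.

```python
import math

def euler_decay(y0, k, dt, steps):
    """Forward-Euler simulation of dy/dt = -k*y."""
    y = y0
    for _ in range(steps):
        y += dt * (-k * y)
    return y

# Simulate decay out to t = 2.0 and compare with the exact
# solution y(t) = y0 * exp(-k*t)
approx = euler_decay(y0=1.0, k=0.5, dt=0.001, steps=2000)
exact = math.exp(-0.5 * 2.0)
# At this step size the simulation tracks the analytic answer closely;
# halving dt roughly halves the error (first-order accuracy)
```

Knowing that the true solution decays monotonically tells us the step size must satisfy dt < 2/k for the scheme to remain stable, exactly the kind of analytic insight the paragraph above describes.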

Neuromorphic computing is a method of computer engineering in which elements of a computer are modeled after systems in the human brain and nervous system. The term refers to the design of both hardware and software computing elements. Neuromorphic computing works by mimicking the physics of the brain: it establishes what are known as spiking neural networks, in which spikes from individual electronic neurons activate other neurons down a cascading chain. This is analogous to the way the brain sends and receives signals via biological neurons, sparking movement and sensation in our bodies.
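The spiking behaviour described above is commonly modelled with a leaky integrate-and-fire neuron: the membrane potential integrates incoming current, leaks toward zero, and emits a spike (then resets) when it crosses a threshold. The following is an illustrative simulation with made-up parameter values, not a model of any specific neuromorphic chip.

```python
def lif_spikes(current, threshold=1.0, leak=0.1, dt=1.0, steps=100):
    """Leaky integrate-and-fire neuron under a constant input current.
    Returns the time steps at which the neuron fires."""
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt * (current - leak * v)   # integrate input, leak toward zero
        if v >= threshold:
            spikes.append(t)
            v = 0.0                      # reset after firing
    return spikes

print(lif_spikes(0.05))          # weak input never reaches threshold: []
print(len(lif_spikes(0.2)) > 0)  # stronger input yields a spike train: True
```

A weak current settles below threshold and produces no output at all, while a stronger one fires periodically, which is how information is carried as discrete spike events rather than continuous values.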

Cloud computing is the delivery of technology services, including compute, storage, databases, networking, software, and many more, over the internet with pay-as-you-go pricing. Cloud computing makes it possible for companies to deploy their applications faster, without excessive maintenance, which is handled by the service provider. It also leads to better use of computing resources, matched to the needs and requirements of a business as they change over time.

Data science tools and the network science approach offer a unique perspective to tackle complex problems, impenetrable to linear-proportional thinking. Building on decades of development of fundamental understanding of networks, the modern data deluge has opened up unprecedented opportunities to study and understand the structure and function of social, economic, political and information systems. The concept of networks has become indispensable in the social, information, biological, and physical sciences. Data-driven network science aims at explaining complex phenomena at larger scales emerging from simple principles of network link formation.
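A classic example of "complex phenomena emerging from simple principles of link formation" is preferential attachment: each new node links to an existing node with probability proportional to its degree, and heavily connected hubs emerge without ever being designed in. The sketch below is a toy Barabási–Albert-style growth process in plain Python (one link per new node, arbitrary network size).

```python
import random

def preferential_attachment(n, seed=0):
    """Grow a network where each new node links to an existing node chosen
    with probability proportional to its degree (one link per new node)."""
    rng = random.Random(seed)
    edges = [(0, 1)]
    endpoints = [0, 1]          # each node appears once per incident edge,
                                # so a uniform draw is degree-proportional
    for new in range(2, n):
        target = rng.choice(endpoints)
        edges.append((new, target))
        endpoints += [new, target]
    return edges

edges = preferential_attachment(200)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
# The average degree is ~2, yet the best-connected hub far exceeds it:
# a heavy-tailed degree distribution from a one-line formation rule
```

Replacing `rng.choice(endpoints)` with a uniform choice over nodes destroys the hubs, showing how sensitive large-scale structure is to the local link-formation rule.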

Mathematics is the bedrock of any contemporary discipline of science. Almost all the techniques of modern data science, including machine learning, have a deep mathematical underpinning.

Deep learning is a type of machine learning and artificial intelligence (AI) that imitates the way humans gain certain types of knowledge. Deep learning is an important element of data science, which includes statistics and predictive modeling. It is extremely beneficial to data scientists who are tasked with collecting, analyzing and interpreting large amounts of data; deep learning makes this process faster and easier. In deep learning, a computer model learns to perform classification tasks directly from images, text, or sound. Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-level performance.
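Learning directly from raw inputs can be demonstrated on the smallest task that defeats linear models: XOR. The sketch below trains a tiny fully connected network (2 inputs, one hidden layer, 1 output) by backpropagation in plain Python; architecture, learning rate, and seed are all illustrative choices, and real deep learning frameworks automate this machinery at far larger scale.

```python
import math
import random

def train_xor(epochs=4000, lr=0.5, hidden=4, seed=1):
    """2 -> hidden -> 1 sigmoid network trained by backpropagation on XOR.
    Returns the per-epoch squared-error losses."""
    rng = random.Random(seed)
    W1 = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    losses = []
    for _ in range(epochs):
        total = 0.0
        for (x1, x2), y in data:
            # Forward pass
            h = [sig(W1[j][0] * x1 + W1[j][1] * x2 + b1[j]) for j in range(hidden)]
            o = sig(sum(W2[j] * h[j] for j in range(hidden)) + b2)
            total += (o - y) ** 2
            # Backward pass: chain rule through the sigmoids
            d_o = 2 * (o - y) * o * (1 - o)
            for j in range(hidden):
                d_h = d_o * W2[j] * h[j] * (1 - h[j])
                W2[j] -= lr * d_o * h[j]
                W1[j][0] -= lr * d_h * x1
                W1[j][1] -= lr * d_h * x2
                b1[j] -= lr * d_h
            b2 -= lr * d_o
        losses.append(total)
    return losses

losses = train_xor()
# The loss falls as the network discovers a nonlinear decision boundary
```

The same forward/backward structure, stacked over many more layers and driven by far more data, is what gives deep models their state-of-the-art accuracy.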

Statistical Learning and Data Science is a reference work in the rapidly evolving context of converging methodologies. It gathers contributions from some of the foundational thinkers in the different fields of data analysis and covers the major theoretical results in the domain. On the methodological front, the volume includes conformal prediction and frameworks for assessing confidence in outputs, together with the attendant risk. It illustrates a wide range of applications, including semantics, credit risk, energy production, genomics, and ecology.

Predictive Analytics is among the most useful applications of data science. Using it allows executives to predict upcoming challenges, identify opportunities for growth, and optimize their internal operations. There isn’t a single way to do predictive analytics, though; depending on the goal, different methods provide the best results. Predictive analytics is the area of data science focused on interpreting existing data in order to make informed predictions about future events.
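The simplest of those methods is a linear trend fitted by ordinary least squares and extrapolated one period ahead. The sketch below uses hypothetical monthly sales figures; the closed-form slope and intercept follow the standard least-squares formulas.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical monthly sales figures (perfectly linear for illustration)
months = [1, 2, 3, 4, 5, 6]
sales = [10.0, 12.0, 14.0, 16.0, 18.0, 20.0]
slope, intercept = fit_line(months, sales)

forecast = slope * 7 + intercept
print(forecast)   # 22.0 -- the predicted value for month 7
```

Real predictive analytics swaps this toy trend line for richer models (seasonality, regression with many features, machine learning), but the shape of the workflow is the same: fit to existing data, then project forward.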

Business Intelligence (BI) is a means of performing descriptive analysis of data using technology and skills to make informed business decisions. The set of tools used for BI collects, governs, and transforms data. It facilitates decision-making by enabling the sharing of data between internal and external stakeholders. The goal of BI is to derive actionable intelligence from data. BI has an advantage over data science in that it rests on concrete data points, few and simple assumptions, self-explanatory metrics, and automated processes.
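Descriptive analysis of the kind BI tools automate reduces, at its core, to aggregating records into self-explanatory metrics. A minimal sketch with hypothetical transaction records:

```python
from collections import defaultdict

# Hypothetical transaction records: (region, amount)
transactions = [("North", 120), ("South", 80), ("North", 150), ("South", 95)]

totals = defaultdict(int)
counts = defaultdict(int)
for region, amount in transactions:
    totals[region] += amount
    counts[region] += 1

for region in sorted(totals):
    print(region, totals[region], totals[region] / counts[region])
# North 270 135.0
# South 175 87.5
```

Unlike a predictive model, nothing here is estimated or assumed: the totals and averages are exact summaries of what happened, which is why such metrics are self-explanatory.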

Digital transformation touches all areas of business, including product innovation, operations, go-to-market strategy, customer service, marketing, and finance. However, digitization is not only about accelerating business processes and leveraging new opportunities. With data science capabilities, you can identify how to digitally transform your business and which business areas require transformation. Data-driven decision making is more effective and realistic because decisions are based on actual information rather than assumptions. An important aspect of data science is that it does not just take past data into consideration, but also anticipates the future based on various holistic factors.

Data scientists have changed almost every industry. In medicine, their algorithms help predict patient side effects. In sports, their models and metrics have redefined “athletic potential.” Data science applications have even tackled traffic, with route-optimizing models that capture typical rush hours and weekend lulls. The most cutting-edge data scientists, working in machine learning and AI, make models that automatically self-improve, noting and learning from their mistakes. Data science, often known as data-driven science, combines several aspects of statistics and computation to transform data into actionable information. Data science combines techniques from several disciplines to collect data, analyze it, generate perspectives from it, and use it to make decisions.

Data management is far from static, and in the new decade, every data-driven organization must find ways to collect, analyze and make business sense of its ever-growing data assets. Organizations are increasingly adopting multicloud strategies and moving their workloads and data to the cloud. Data will be housed both on-premises and in the cloud. Managing this scattered data across multiple sources, formats, and deployments is a challenge organizations will confront in 2022. This will lead businesses to reimagine their data management strategy and adopt a hybrid approach that aims to connect and manage data irrespective of where it resides.

Visualization is the first step in making sense of data. To translate and present complex data and relationships simply, data analysts use different methods of data visualization: charts, diagrams, maps, etc. Choosing the right technique and configuring it well is often the only way to make data understandable; conversely, poorly chosen tactics fail to unlock the full potential of the data, or can even render it irrelevant. Selecting the most powerful tool available isn’t always the best idea: learning curves can be steep, requiring more resources just to get up and running, while a simpler tool might create exactly what’s needed in a fraction of the time. Remember, though, that the tool is only part of the equation in creating a data visualization; designers also need to consider everything else that goes into making a great one.

Data science is a broad field advancing at a tremendous pace. Every few months, new research, models, and advances are announced. For data science practitioners, it is essential to keep abreast of the latest advances; however, given the demands on our time, this can be a daunting task. The Research Frontiers track is the first of its kind: rather than parsing the contents of countless papers or attending academic conferences, you have the most relevant information brought to you. World-class academics, researchers, and professionals summarize the latest research across focus areas and detail what’s important.