AI explained: Deepfakes

From use case to infrastructure

Digitalization is generating ever-growing volumes of data, from production, customer relationships and transport to sensors, artificial intelligence (AI) and connected cars. Successfully processing this mostly unstructured information can unlock key insights and deliver a major competitive advantage.

What's more, big data analytics allows companies to draw important conclusions for their own business.



Uncovering insights

  • Which strategy should we follow?

  • Which products will be popular in the future?

  • Which machines will fail soon?

  • Which processes are inefficient and in need of optimization?

One thing is for sure: big data can deliver valuable insights. But to get them, companies need a powerful infrastructure capable of unearthing this treasure trove of data. Many businesses already use the cloud to quickly build prototypes for new projects, collect AI data at the point where it is created, and either store it in their own data center or process it further.

Getting insights even faster with data science and artificial intelligence

Thanks to artificial intelligence and deep learning, we can now analyze data at unprecedented speed, delivering insights in minutes that previously would have taken days, weeks or even months. Still, we must not forget that artificial intelligence is based on human input. If that input is flawed for any reason, the flaw is reflected in the result.
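To make this concrete, here is a small sketch of the garbage-in, garbage-out effect (using scikit-learn purely for illustration; the library is our assumption, not something mentioned above): the same model is trained once on clean labels and once on deliberately corrupted labels, and the corrupted version typically scores noticeably worse on identical test data.

```python
# Minimal sketch: flawed training input (corrupted labels) degrades results.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean labels
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Corrupt 30% of the training labels to simulate faulty human input
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.30
y_noisy = np.where(flip, 1 - y_train, y_train)
noisy_model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)

print("accuracy with clean labels:    ", clean_model.score(X_test, y_test))
print("accuracy with corrupted labels:", noisy_model.score(X_test, y_test))
```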

Deepfakes – the power of data

Have you ever heard of deepfakes? The term combines "deep learning", a branch of machine learning, with "fake". In a deepfake, a person acts in front of a camera, but their entire appearance is recalculated so that they look and sound like someone else, such as Donald Trump, Michael Caine, Al Gore, Mark Zuckerberg or Barack Obama. The deep learning behind it collects and analyzes huge amounts of data about a person, including photos, videos and audio recordings. The more data that is available, the better a potential deepfake can be. Many of these fakes are incredibly convincing, and laypersons can no longer tell the real person apart from their digital doppelgänger.
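For the technically curious, the core trick behind many early face-swap deepfakes is surprisingly compact: a shared encoder learns a common face representation, one decoder is trained per person, and the swap happens by decoding person A's face with person B's decoder. The PyTorch sketch below illustrates just that idea; the layer sizes and the 64x64 resolution are illustrative assumptions, and training is deliberately omitted.

```python
# Sketch of the shared-encoder / per-person-decoder idea behind many deepfakes.
# Shapes and layer sizes are illustrative assumptions, not a production model.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

# Training (not shown) reconstructs each person's faces through the shared
# encoder and their own decoder. The swap is then a single forward pass:
face_of_a = torch.rand(1, 3, 64, 64)          # stand-in for a real 64x64 frame
fake = decoder_b(encoder(face_of_a))          # A's expression, B's appearance
print(fake.shape)                             # torch.Size([1, 3, 64, 64])
```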

The danger of deepfakes, and the responsibility that comes with using them, became clear in 2015, when a video of the then Greek finance minister Yanis Varoufakis was faked to show him raising his middle finger. The result was a huge political scandal. When the German satirist Jan Böhmermann later claimed responsibility for the fake, barely anyone believed him.

Big data – the positives outweigh the negatives

By now we know the great responsibility that comes with using big data. But we should also remember that big data and artificial intelligence power many technological innovations and simplify processes that would otherwise be long and laborious. The time for these technologies has come, and the major cloud providers have recognized it: they offer cloud-based AI services and platforms that are easy to order and integrate.
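As one illustration of how little glue code such a service needs, here is a hedged sketch that calls Google Cloud Vision's Python client to label an image. The choice of provider and the input file are our assumptions; the article itself names no specific service, and credentials and project setup are presumed to already exist.

```python
# Sketch: calling a cloud AI service with a few lines of glue code.
# Google Cloud Vision is used here only as a stand-in for "a major cloud
# provider's AI service"; credentials are assumed to be configured.
from google.cloud import vision

def label_image(path: str) -> None:
    client = vision.ImageAnnotatorClient()          # uses ambient credentials
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)  # single API call
    for label in response.label_annotations:
        print(f"{label.description}: {label.score:.2f}")

# label_image("factory_floor.jpg")  # hypothetical input file
```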

Challenges in AI infrastructure

Integrating AI without external support is a complex undertaking. Assembling and integrating standard components for compute, storage, networking and deep learning software adds complexity and lengthens implementation time. As a result, data scientists waste valuable time on system integration.

 

Achieving performance that is both predictable and scalable is difficult. Best practice for deep learning is to start small and scale resources over time. Traditionally, compute with direct-attached storage has been used to feed data into AI workflows, but scaling conventional storage can cause interruptions and downtime in running processes.

 

Such interruptions hit the productivity of data analysts directly. Deep learning infrastructure involves countless dependencies between hardware and software, so keeping it running requires comprehensive AI expertise. Downtime or sluggish AI performance can trigger a chain reaction that drags down developer productivity and sends operating costs skyrocketing.

NetApp and NVIDIA – a strong team

Now you can unlock the full potential of AI and deep learning. The NetApp ONTAP AI proven architecture, powered by NVIDIA DGX systems and cloud-connected NetApp all-flash storage, lets you simplify, accelerate and integrate your data pipeline. A data fabric architecture reliably streamlines the data path between the point where data is created, the data center and the cloud, while also accelerating analytics, training and inference.

NetApp and NVIDIA – driving innovation together

The heart of ONTAP AI is the NVIDIA DGX A100 system, the universal building block for AI in the data center, supporting deep learning training, inference, data science and other high-performance workloads on a single platform. Each DGX A100 system combines eight NVIDIA A100 Tensor Core GPUs with two second-generation AMD EPYC™ processors, plus the latest ultra-fast NVIDIA Mellanox ConnectX-6 interconnects, which support 100/200 Gb Ethernet and InfiniBand. Using NVIDIA's Multi-Instance GPU (MIG) technology, a DGX A100 system can be partitioned into up to 56 instances to accelerate many smaller workloads in parallel, which makes allocating GPU capacity in ONTAP AI extremely efficient. Data scientists across your business can iterate faster, automate reproducibility and complete AI projects up to three months earlier with higher quality.
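To give a feel for what that partitioning means in practice, here is a minimal sketch of how a single job pins itself to one MIG slice. The device UUID is a placeholder; on a real system it comes from the administrator's MIG configuration (listed, for example, by nvidia-smi -L), and everything else is plain PyTorch.

```python
# Sketch: pinning one training job to a single MIG slice of a DGX A100.
import os

# Set the visible device *before* CUDA is initialized. The UUID below is a
# placeholder; the real value comes from the system's MIG configuration.
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

import torch

if torch.cuda.is_available():
    # The process now sees exactly one device: its own MIG slice.
    print("visible devices:", torch.cuda.device_count())
    x = torch.randn(1024, 1024, device="cuda")
    print("matmul ok:", (x @ x).shape)
else:
    print("No CUDA device visible; run this on a MIG-enabled A100 system.")
```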

 

NetApp AFF (All Flash FAS) systems provide fast, flexible all-flash storage built on end-to-end NVMe technology to keep data flowing to the deep learning processes. Mellanox Spectrum Ethernet switches round out the ONTAP AI solution, offering the low latency, high density, high performance and energy efficiency that AI environments require.
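The following sketch shows what "keeping the data flowing" looks like from the training side: a PyTorch DataLoader streaming samples from a shared NFS mount into the GPUs. The mount path /mnt/ontap_ai/train and the file layout are hypothetical; the point is that fast shared storage lets many parallel workers feed the DGX systems without starving them.

```python
# Sketch: parallel data loading from a shared (hypothetical) NFS mount.
import os
import torch
from torch.utils.data import Dataset, DataLoader

class TensorFileDataset(Dataset):
    """Loads pre-serialized (sample, label) tuples from a shared NFS mount."""
    def __init__(self, root: str):
        self.paths = [os.path.join(root, f) for f in sorted(os.listdir(root))
                      if f.endswith(".pt")]

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        return torch.load(self.paths[idx])   # each file holds (sample, label)

# "/mnt/ontap_ai/train" is a hypothetical NFS export from the AFF system.
dataset = TensorFileDataset("/mnt/ontap_ai/train")
loader = DataLoader(dataset, batch_size=64, num_workers=8,
                    pin_memory=True, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
for samples, labels in loader:
    samples = samples.to(device, non_blocking=True)  # overlap copy and compute
    labels = labels.to(device, non_blocking=True)
    # ... forward/backward pass on the DGX GPUs ...
    break
```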

Why NetApp for artificial intelligence?

AI workloads generate huge amounts of data that must move smoothly through a five-stage pipeline; even the smallest bottleneck can bring a project to a halt. That is why the combination of NetApp ONTAP AI, NVIDIA DGX servers and cloud-connected all-flash storage from NetApp delivers excellent AI performance, from data generation and analysis all the way to archiving.

The key benefits
  • Lower risk thanks to flexible, validated solutions

  • Elimination of complex designs and uncertainty

  • Faster configuration and deployment with readily available, preconfigured solutions

  • The performance and scalability your workloads need

  • The ability to start small and grow without interruption

  • Intelligent data management through an integrated pipeline spanning the point of creation, the data center and the cloud

  • Deployment on a solution backed by AI expertise and straightforward support options

  • Standardized workloads

  • Elimination of infrastructure silos

  • Flexibility to respond to changing business requirements

Any Questions?

If you would like to know more about this subject, I am happy to assist you.

Contact us
Marc Riedel
Partner Manager