Social media giant Meta (formerly Facebook) has unveiled plans to build a powerful AI supercomputer, the AI Research SuperCluster (RSC), which the company claims is already among the fastest such machines in the world.
The company said that, once completed in mid-2022, RSC will be the fastest AI supercomputer in the world.
According to Meta, RSC will be used to build AI models capable of learning from trillions of samples and working across different languages, as well as to develop new augmented reality tools. The same models could analyze text, images, and video together to detect hate speech and fake news across Facebook’s family of apps.
Additionally, RSC will be used to develop the metaverse, with Meta stating:
“Ultimately, the work done with RSC will pave the way toward building technologies for the next major computing platform — the metaverse, where AI-driven applications and products will play an important role.”
AI, short for artificial intelligence, is currently used for tasks such as translating text between languages and identifying prohibited or harmful content. Performing such tasks at scale, however, requires AI supercomputers: high-speed machines built to train artificial intelligence models and machine learning systems.
“With RSC, we can more quickly train models that use multimodal signals to determine whether an action, sound or image is harmful or benign,” Meta said. “This research will not only help keep people safe on our services today, but also in the future, as we build for the metaverse.”
It is worth noting that Meta apparently aims to develop the metaverse using this centralized supercomputer. That approach does not align with crypto’s core ethos of decentralization, which partly explains why the crypto community is not optimistic about Meta building the metaverse.
Meanwhile, in terms of processing power, RSC’s phase one, which is already running, “comprises a total of 760 NVIDIA DGX A100 systems as its compute nodes, for a total of 6,080 GPUs [graphics processing units] — with each A100 GPU being more powerful than the V100 used in our previous system,” Meta detailed in another post.
Once phase two goes live, the supercomputer is expected to contain a total of approximately 16,000 GPUs and should be able to train AI systems “with more than a trillion parameters on data sets as large as an exabyte.”
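For a rough sense of scale, the published figures can be checked with some simple back-of-the-envelope arithmetic, sketched below in Python. The eight-GPUs-per-node count reflects NVIDIA’s standard DGX A100 configuration, and the 16-bit (2-byte) weight precision is an illustrative assumption rather than anything Meta has specified.

```python
# Back-of-the-envelope figures for RSC, based on the numbers Meta has published;
# the per-node GPU count and precision assumption are illustrative.

DGX_A100_SYSTEMS_PHASE1 = 760   # compute nodes in phase one (per Meta)
GPUS_PER_DGX_A100 = 8           # a standard NVIDIA DGX A100 holds eight A100 GPUs

phase1_gpus = DGX_A100_SYSTEMS_PHASE1 * GPUS_PER_DGX_A100
print(f"Phase one GPUs: {phase1_gpus}")  # 6,080, matching Meta's figure

# Rough storage needed just to hold the weights of a trillion-parameter model,
# assuming 16-bit (2-byte) precision -- an assumption, not a Meta figure.
params = 1_000_000_000_000
bytes_per_param = 2
weight_bytes = params * bytes_per_param
print(f"Weights alone: ~{weight_bytes / 1e12:.0f} TB")  # roughly 2 TB
```

The weight figure covers only the model parameters themselves; the exabyte-scale data sets Meta mentions refer to the training data, which is orders of magnitude larger.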
Source: Cryptonews