Facebook’s AI hardware design is now open source: the social network announced this week that it intends to share its newest artificial intelligence server blueprints. The news continues a strategy the company began more than three years ago, when it launched the Open Compute Project to let companies share designs for new hardware components. The server, called Big Sur, was built specifically to train the latest class of AI systems, known as deep learning, which simulate the neural pathways found in the brain.
Google uses this type of AI to recognize spoken commands, translate text between languages, improve its search results, and power many other projects. Not coincidentally, the company released its machine learning library TensorFlow in November under the open-source Apache 2.0 license.
According to its website, TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in these graphs represent mathematical operations, while the graph edges represent the multidimensional data arrays, or tensors, communicated between them. This flexible architecture lets users deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API.
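The data flow model described above can be illustrated with a minimal pure-Python sketch. This is an illustrative toy, not the actual TensorFlow API: nodes hold operations, and edges carry array values ("tensors") between them.

```python
# Toy data flow graph: nodes are operations, edges carry values (tensors).
# Illustrative only -- this is NOT the real TensorFlow API.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # function applied to the incoming values
        self.inputs = inputs  # edges: upstream nodes feeding this one

    def evaluate(self):
        # Recursively pull values along incoming edges, then apply the op.
        return self.op(*(n.evaluate() for n in self.inputs))

def constant(value):
    # A source node with no incoming edges.
    return Node(lambda: value)

def add(a, b):
    # Element-wise addition of two incoming tensors.
    return Node(lambda x, y: [i + j for i, j in zip(x, y)], a, b)

def scale(a, factor):
    # Multiply every element of the incoming tensor by a scalar.
    return Node(lambda x: [i * factor for i in x], a)

# Build the graph for (c1 + c2) * 2, then evaluate it.
c1 = constant([1, 2, 3])
c2 = constant([10, 20, 30])
result = scale(add(c1, c2), 2)
print(result.evaluate())  # [22, 44, 66]
```

Because the computation is expressed as a graph rather than as immediate function calls, a framework like TensorFlow can decide afterwards where each node runs, which is what makes the single-API deployment across CPUs and GPUs possible.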
This is where Facebook’s AI servers play an important role. Google did not release the hardware that TensorFlow runs on, and without that hardware the software framework is severely limited in what it can do. Facebook’s decision to open-source its AI servers resolves that problem.
Big Sur, the design the social network contributed to the Open Compute Project, holds eight GPU boards, each composed of many processing cores yet consuming just 300 watts of power. It was built around Nvidia’s Tesla M40 but can support a wide variety of PCI-e cards, according to the company.
GPUs were originally developed to render graphics for games and other highly visual applications, but they have proved well suited to deep learning too. These machines still contain conventional CPUs, but neural networks run far more efficiently when much of the computation is offloaded to GPUs. In fact, GPUs can deliver more computational capacity per dollar than conventional CPUs.
Image source: Iqworkforce