Optical processors point the way for light-speed computing | Science and Technology Magazine

2021-11-25 By Ms. Jasmine H

Image source: Oxford University

The idea of computing at the speed of light has been around for a long time, and the new generation of optical processors is expected to be both faster and cooler-running than existing electronic processors.

For decades, data has travelled over wide-area networks as pulses of light, but optical (or photonic) computing has been slow to take on the challenge of moving data as light at the processor level: it turns out that photons are far harder to marshal and manipulate than electrons. And while the speed of conventional data processing kept rising year on year, technologists had little incentive to solve the optical problem.

Now, however, it is generally acknowledged that performance improvement in traditional processor architectures has reached a deadlock. To make matters worse, just as advanced applications in artificial intelligence and quantum computing demand ever more computing power, the physical limit on the number of cores that can be packed into conventional ICs is being reached.

Of course, Moore's Law can be stretched to some extent through parallel processing, which divides a computing task and runs it across multiple microprocessors simultaneously. But parallel processing has its own limitations, and it adds extra layers of complexity to already very complex workloads.
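The divide-and-recombine idea behind parallel processing can be illustrated in a few lines. This is a toy sketch using only the Python standard library; it has nothing to do with the specific hardware discussed here.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """Sum the integers in [lo, hi): one slice of the full task."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Split sum(range(n)) into slices, run the slices on a pool of
    workers, then recombine the partial results."""
    step = max(1, n // workers)
    slices = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, slices))

print(parallel_sum(1_000_000))  # same answer as sum(range(1_000_000))
```

The coordination code (slicing, pooling, recombining) is exactly the "additional layer of complexity" the article refers to: it is pure overhead relative to the single-threaded version.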

In addition, traditional core-intensive computing consumes vast amounts of precious power and generates a great deal of waste heat. So when analysts such as ReportLinker report that optical processors could cut power consumption in certain critical applications by at least 50 per cent, the IT industry is bound to take notice. If that number proves out, optical processor-based platforms will be very attractive to data-centre operators keen to offer private-cloud customers the ultimate (and cleaner) computing power.

So the recent resurgence of all-optical computing innovation, from academic researchers, established industry players and would-be start-ups alike, has good reason behind it: the shared belief that optical processing will have a major transformative impact on IT over the next six to ten years. The market opportunity is certainly there. According to analyst ReportLinker again (December 2020), the global photonic processor market is expected to be worth US$21.3m (£14.29m) by 2025, a compound annual growth rate (CAGR) of 28.3 per cent over the forecast period.

Optical advocates argue that although performance gains are generally seen as the main driver for optical processing, reduced power consumption and heat are key factors too. Lightmatter CEO Nick Harris says the saturation of Dennard scaling (transistors may keep getting smaller, but their power consumption no longer falls with them) has brought the current generation of high-performance electronic ICs to their cooling limits. The start-up announced its AI photonic processor in August 2020, and two months later launched Lightmatter Passage, a wafer-scale programmable photonic interconnect that lets heterogeneous arrays of chips communicate.

"To continue to drive the computational growth required to develop and run the most advanced neural network models, artificial intelligence applications need increases in both computational speed and energy efficiency," Harris said. Optical computing is now "the only solution that can support the pace of artificial intelligence innovation needed to realise the investment".

Dr Nick New, CEO and founder of optical co-processor specialist Optalysys, agrees. He cites the example of OpenAI's Generative Pre-trained Transformer 3 (GPT-3), an autoregressive language model that uses deep learning to produce human-like text.

"GPT-3 has 175 billion parameters, two orders of magnitude more than its predecessor, GPT-2," New explained. "Running such a model on dedicated hardware, such as Google's TPU 3.0 AI ASIC, requires water cooling across its [285,000] 300W CPU cores. Without a fundamentally different approach to data processing, the only option is more cores, more cooling and, ultimately, more power. That is not sustainable."

New believes the IT market must accept that the relevance of Moore's Law (the number of transistors in a dense integrated circuit doubles approximately every two years) has passed, and that photonic processing demands a different way of thinking about the supply of computing power. That does not mean, however, that the development of optical/photonic processors will be lawless.

"Optical computing will be governed by multiple 'laws', covering not only the physical size of the modulators, but also the efficiency of electro-optic and analogue-to-digital signal conversion, and photodetector sensitivity," New predicts.

In the context of artificial intelligence, says Lightmatter's Harris, optical computing technology focuses on inference, with two core performance metrics: inferences per second (IPS) and inferences per second per watt (IPS/W). He said: "The photonic-computing equivalent of Moore's Law will state that IPS and IPS/W double every two years [from now on]; that is indeed my team's roadmap goal."

Harris added: "The size of the optics is unlikely to shrink significantly in terms of per-component area, but the performance scaling of photonic computers may be dramatic. That is because the number of colours (that is, wavelengths) that the computing core can process simultaneously can be increased, as can the clock frequency."

Optalysys launched its Fourier-engine photonic co-processor, the FT:X 2000, in March 2019. It relies on the Fourier transform, a mathematical transform that decomposes a function of space or time into a function of spatial or temporal frequency. The term refers both to the frequency-domain representation itself and to the mathematical operation that associates that representation with the original function of space or time.
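Why a fast Fourier transform is so valuable comes down to the convolution theorem: filtering in the spatial domain equals pointwise multiplication in the frequency domain. A minimal numerical sketch follows, with NumPy's FFT standing in for the optics; none of this is Optalysys code.

```python
import numpy as np

rng = np.random.default_rng(0)
image  = rng.random((64, 64))
kernel = np.zeros((64, 64))
kernel[:3, :3] = 1.0 / 9.0   # a 3x3 box-blur filter, zero-padded to the image size

# Convolution theorem: transform, multiply pointwise, transform back.
fft_conv = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

# Cross-check one output pixel against the direct (circular) convolution sum.
i, j = 10, 20
manual = sum(image[u, v] * kernel[(i - u) % 64, (j - v) % 64]
             for u in range(64) for v in range(64))
assert np.isclose(fft_conv[i, j], manual)
```

In an optical Fourier engine, the two transforms happen in the physics of light propagation rather than in arithmetic, which is where the claimed speed and power advantage comes from.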

"Optalysys' integrated silicon-photonic co-processor has its optical circuits built on a single piece of silicon," said the company's New. "It uses the interference properties of light to perform Fourier transforms at speeds and power consumptions that were not possible before."

Lightmatter's approach is based on what it calls a programmable nanophotonic processor architecture: an optical processor, implemented in silicon photonics, that performs matrix transformations on light. It relies on a two-dimensional array of Mach-Zehnder interferometers (MZIs) fabricated in a silicon-photonics process. A Mach-Zehnder interferometer is a device that determines the relative phase shift between two collimated beams derived by splitting light from a single source.

"To achieve an N×N matrix product, our approach requires N² MZIs, the same number of compute elements as a systolic MAC [multiply-accumulate] array," Harris explained. "Mathematically, each MZI performs a 2×2 matrix-vector product. The entire mesh of MZIs together multiplies an N×N matrix by an N-element vector. The calculation is done in the time of flight of the optical signal from the input to the output of the MZI array, about 100 picoseconds: shorter than a single clock cycle of most computers."
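The 2×2 building block Harris describes can be sketched numerically. Here a single MZI is modelled as two 50/50 couplers with two phase shifters, a standard textbook decomposition rather than Lightmatter's actual design:

```python
import numpy as np

def mzi(theta, phi):
    """2x2 transfer matrix of a Mach-Zehnder interferometer: an input
    phase shifter, a 50/50 coupler, an internal phase shifter and a
    second 50/50 coupler (textbook decomposition, illustrative only)."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50/50 beamsplitter
    return (bs @ np.diag([np.exp(1j * theta), 1.0])
               @ bs @ np.diag([np.exp(1j * phi), 1.0]))

U = mzi(0.7, 1.3)
assert np.allclose(U.conj().T @ U, np.eye(2))        # unitary: lossless optics

# Two complex field amplitudes transformed "in flight":
x = np.array([1.0, 0.5])
y = U @ x
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))  # optical power conserved
```

Tuning the two phases (theta, phi) programs the 2×2 product each MZI performs; a full N×N mesh composes N² such blocks, exactly as in the quote.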

At the beginning of this year, an international research team from the University of Münster, the University of Oxford, the University of Exeter, the University of Pittsburgh, the Swiss Federal Institute of Technology Lausanne (EPFL) and IBM Research Zurich jointly announced an accelerator IC architecture that combines integrated photonic devices with phase-change materials (PCMs) to deliver matrix-vector (MV) multiplication, the calculation considered essential for artificial intelligence and machine-learning applications.

The team developed an "integrated phase-change photonic co-processor", or photonic processing unit (PPU) for short. This is a photonic counterpart to a tensor processing unit (Google's AI accelerator ASIC developed for neural-network machine learning) that can perform multiple MV multiplications in parallel, using chip-based frequency combs as light sources together with wavelength-division multiplexing (multiplexing several optical carrier signals onto a single optical fibre by using light of different wavelengths).

The matrix elements are stored in phase-change materials (the same materials used in rewritable DVDs and Blu-ray discs), so the matrix state is retained without any energy supply.
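Numerically, the scheme amounts to one fixed weight matrix (the PCM transmission states) applied to many input vectors at once, one vector per comb wavelength. A minimal sketch in plain NumPy; the sizes and names are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# PCM cells hold transmission values in [0, 1]: the non-volatile weights.
W = rng.random((4, 4))

# Each wavelength of the frequency comb carries its own input vector, so
# K vectors are processed in a single pass (wavelength-division multiplexing).
K = 8
X = rng.random((4, K))   # column k = the vector riding on wavelength k
Y = W @ X                # all K matrix-vector products happen "at once"

assert Y.shape == (4, K)
assert np.allclose(Y[:, 0], W @ X[:, 0])   # each column is one MV product
```

The non-volatility matters as much as the parallelism: because `W` lives in the PCM cells themselves, no energy is spent holding or re-fetching the weights between operations.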

"In terms of differentiation, I think the PPU offers much wider bandwidth than 'mainstream' photonics approaches that rely on manipulating optical phase and require a coherent light source," explained Professor C David Wright of the University of Exeter. "This approach uses non-volatile PCM, allowing our device to act as both memory and processor. The matrix elements are stored directly in the device that performs the matrix-vector multiplication; no separate memory is needed."

In their experiments, the team used the PPU in a convolutional neural network to recognise handwritten digits and to perform image filtering. The PPU project claims to be the first to apply frequency combs in the field of artificial neural networks. Team member Harish Bhaskaran, Professor of Applied Nanomaterials at the University of Oxford, said the PPU could have a wide range of applications. "For example, it could quickly and efficiently process huge data sets for medical diagnosis, such as those from CT, MRI and PET scanners."
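Image filtering in a convolutional network reduces to exactly the matrix-vector products such hardware provides: each output pixel is a dot product between the filter kernel and an image patch. A small illustration of that reduction (ordinary NumPy, not the PPU toolchain):

```python
import numpy as np

def conv2d_as_matvec(image, kernel):
    """2D 'valid' convolution (strictly, cross-correlation, as in most
    CNNs) expressed as one patch-matrix x kernel-vector product."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    # Flatten every kh x kw window into a row ("im2col" layout).
    patches = np.array([image[r:r + kh, c:c + kw].ravel()
                        for r in range(oh) for c in range(ow)])
    return (patches @ kernel.ravel()).reshape(oh, ow)

img = np.arange(16.0).reshape(4, 4)            # toy 4x4 "image"
edge = np.array([[1.0, -1.0], [1.0, -1.0]])    # vertical-edge filter
out = conv2d_as_matvec(img, edge)
assert out.shape == (3, 3)
assert np.allclose(out, -2.0)   # gradient of the ramp image is constant
```

Once a convolution is written this way, a photonic MV-multiply core can execute it directly, which is how the digit-recognition demonstration maps onto the hardware.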

In 2019, researchers at MIT's Research Laboratory of Electronics began developing optical accelerator chips for optical neural networks. Their prototype was more efficient than electronic processors, but relied on bulky optical components that limited its use to relatively small networks.

The same team has since described a follow-up optical accelerator based on more compact optical components and optical signal-processing techniques. MIT claims it can scale to neural networks far larger than equivalent electronic processors can handle.

Rather than performing matrix multiplication with Mach-Zehnder interferometers (which MIT says impose scaling restrictions), the processor uses a more compact, energy-efficient optoelectronic scheme that encodes data in optical signals and reads results out with "balanced homodyne detection".
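Balanced homodyne detection itself can be sketched in a few lines: the signal field is mixed with a strong local oscillator on a 50/50 splitter, and the difference between the two photodiode currents is proportional to the signal amplitude along the oscillator's phase. This is a toy model of the general technique, not MIT's actual scheme:

```python
import numpy as np

def balanced_homodyne(signal, lo):
    """Mix a signal field with a local oscillator on a 50/50 splitter;
    the two photodiode currents are the output intensities, and their
    difference equals 2 * Re(signal * conj(lo))."""
    out_plus  = (signal + lo) / np.sqrt(2)
    out_minus = (signal - lo) / np.sqrt(2)
    return np.abs(out_plus) ** 2 - np.abs(out_minus) ** 2

s  = 0.3 * np.exp(1j * 0.0)   # weak signal field encoding the value 0.3
lo = 10.0                      # strong in-phase local oscillator
reading = balanced_homodyne(s, lo)
assert np.isclose(reading, 6.0)   # 2 * |lo| * |s|: amplified linear readout
```

The appeal for computing is visible in the numbers: the weak encoded value 0.3 comes out multiplied by the strong local oscillator, so tiny optical signals can be read out linearly without a separate electronic amplifier stage.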

Another research team, in the Quantum Engineering Technology Labs at the University of Bristol, is focusing on the potential of optical processing in quantum computing applications. "One of the key challenges limiting the scale of integrated quantum photonics is the lack of sources capable of producing high-quality single photons," explained project lead Dr Stefano Paesani.

"Without low-noise photon sources, errors in a quantum computation accumulate rapidly as circuit complexity [increases], making the computation no longer reliable. In addition, optical losses in the sources limit the number of photons the quantum computer can generate and process."

Working with the University of Trento in Italy, the Bristol team developed a technique to address the problem, and in the process produced what is said to be the first integrated photon source compatible with large-scale quantum photonics: a technique called 'inter-modal spontaneous four-wave mixing', in which multiple modes of light propagating through a silicon waveguide interfere non-linearly.

"This creates ideal conditions for generating single photons," said Dr Paesani. "Moreover, the device is fabricated using a CMOS-compatible process in a commercial silicon foundry, which means thousands of sources can readily be integrated on a single device."

In Japan in 2019, a research team at NTT Basic Research Laboratories built PAXEL (photonic accelerator) devices, starting with an electro-optic (EO) modulator that runs at 40Gbit/s yet consumes only 42 attojoules per bit. The researchers then built an optical receiver (an optical-to-electrical, or OE, converter) based on the same technology, which operates at 10Gbit/s with power consumption two orders of magnitude lower than other optical systems, at just 1.6 femtojoules per bit. The OE converter needs no amplifier (which reduces power requirements) and has a capacitance of only a few femtofarads.
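For a sense of scale, the quoted energy-per-bit figures translate into average powers with simple arithmetic on the numbers above:

```python
# Energy per bit x bit rate = average power.
eo_energy_per_bit = 42e-18   # 42 attojoules/bit (EO modulator)
eo_bitrate        = 40e9     # 40 Gbit/s
eo_power = eo_energy_per_bit * eo_bitrate
assert abs(eo_power - 1.68e-6) < 1e-12   # about 1.7 microwatts

oe_energy_per_bit = 1.6e-15  # 1.6 femtojoules/bit (OE receiver)
oe_bitrate        = 10e9     # 10 Gbit/s
oe_power = oe_energy_per_bit * oe_bitrate
assert abs(oe_power - 16e-6) < 1e-12     # about 16 microwatts
```

Microwatt-scale signalling is what makes the "two orders of magnitude lower" claim plausible next to conventional optical links, whose transceivers typically burn milliwatts to watts.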

The NTT team then combined the EO and OE devices to demonstrate what they claim is the world's first "OEO transistor", which can serve as an all-optical switch, wavelength converter and repeater. The development was enabled by the invention of a new photonic "crystal": a synthetic insulating material that controls light.

This is a piece of silicon with three drilled nanocavities (holes), 1.3 microns in length, arranged so that light passing through them interferes with itself and is cancelled out. If a row of nanocavities is blocked, the light instead travels along the path and collects in a light-absorbing material, where it is converted into electric current. The same system works in reverse.


© 2021 The Institution of Engineering and Technology. The Institution of Engineering and Technology is registered as a Charity in England and Wales (No. 211014) and Scotland (No. SC038698).