PsiQuantum’s Path to 1 Million Qubits by the Middle of the Decade – HPCwire


PsiQuantum, founded in 2016 by four researchers with roots at Bristol University, Stanford University, and York University, is one of a few quantum computing startups that has kept a moderately low PR profile. (That's if you disregard the roughly $700 million in funding it has attracted.) The main reason is that PsiQuantum has eschewed the clamorous public chase for NISQ (noisy intermediate-scale quantum) computers and set out to develop a million-qubit system that the company says will deliver big gains on big problems as soon as it arrives.

When will that be?

PsiQuantum says it will have all the manufacturing processes in place by the middle of the decade, and it's working closely with GlobalFoundries (GF) to turn its vision into reality. The generous size of its funding suggests many think it will succeed. PsiQuantum is betting on a photonics-based approach called fusion-based quantum computing (paper) that relies mostly on well-understood optical technology but requires extremely precise manufacturing tolerances to scale up. It also relies on managing individual photons, something that has proven difficult for others.

Here's the company's basic contention:

Success in quantum computing will require large, fault-tolerant systems, and the current preoccupation with NISQ computers is an interesting but ultimately mistaken path. The most effective and fastest route to practical quantum computing will require leveraging (and innovating) existing semiconductor manufacturing processes and networking thousands of quantum chips together to reach the million-qubit system threshold that's widely regarded as necessary to run game-changing applications in chemistry, banking, and other sectors.

It's not that incrementalism is bad. In fact, it's necessary. But it's not well served when focused on delivering NISQ systems, argues Peter Shadbolt, one of PsiQuantum's founders and the current chief scientific officer.

"Conventional supercomputers are already really good. You've got to do some kind of step change; you can't increment your way [forward], and especially you can't increment with five qubits, 10 qubits, 20 qubits, 50 qubits to a million. That is not a good strategy. But it's also not true to say that we're planning to leap from zero to a million," said Shadbolt. "We have a whole chain of incrementally larger and larger systems that we're building along the way. Those allow us to validate the control electronics, the systems integration, the cryogenics, the networking, etc. But we're not spending time and energy trying to dress those up as something that they're not. We're not having to take those things and try to desperately extract computational value from something that doesn't have any computational value. We're able to use those intermediate systems for our own learnings and for our own development."

That's a much different approach from that of the majority of quantum computing hopefuls. Shadbolt suggests the broad message about the need to push beyond NISQ dogma is starting to take hold.

"There is a change that is happening now, which is that people are starting to program for error-corrected quantum computers, as opposed to programming for NISQ computers. That's a welcome change, and that's happening across the whole space. If you're programming for NISQ computers, you very rapidly get deeply entangled, if you'll forgive the pun, with the hardware. You start looking under the hood, and you start trying to find shortcuts to deal with the fact that you have so few gates at your disposal. So, programming NISQ computers is a fascinating, intellectually stimulating activity, I've done it myself, but it rapidly becomes sort of siloed and you have to pick a winner," said Shadbolt.

"With fault tolerance, once you start to accept that you're going to need error correction, then you can start programming in a fault-tolerant gate set, which is hardware agnostic, and it's much more straightforward to deal with. There are also some surprising characteristics, which mean that the optimizations that you make to algorithms in a fault-tolerant regime are in many cases the diametric opposite of the optimizations that you would make in the NISQ regime. It really takes a different approach, but it's very welcome that the whole industry is moving in that direction and spending less time on these kinds of myopic, narrow efforts," he said.

That sounds a bit harsh. PsiQuantum is no doubt benefitting from the manifold efforts by the young quantum computing ecosystem to tout advances and build traction by promoting NISQ use cases. There's an old business axiom that a little hype is often a necessary lubricant to accelerate development of young industries; quantum computing certainly has its share. A bigger question is whether PsiQuantum will beat rivals to the end game. IBM has laid out a detailed roadmap and said 2023 is when it will start delivering quantum advantage, using a 1,000-qubit system, with plans for eventual million-qubit systems. Intel has trumpeted its CMOS strength to scale up manufacturing of its quantum dot qubits. D-Wave has been selling its quantum annealing systems to commercial and government customers for years.

It's really not yet clear which of the qubit technologies (semiconductor-based superconducting, trapped ions, neutral atoms, photonics, or something else) will prevail, and for which applications. What's not ambiguous is PsiQuantum's go-big-or-go-home strategy. Its photonics approach, the company argues, has distinct advantages in manufacturability and scalability, operating environment (less frigid), ease of networking, and error correction. Shadbolt recently talked with HPCwire about the company's approach, technology, and progress.

What is fusion-based quantum computing?

Broadly, PsiQuantum uses a form of linear optical quantum computing in which individual photons are used as qubits. Over the past year and a half, the previously stealthy PsiQuantum has issued several papers describing the approach while keeping many details close to the vest (papers listed at the end of the article). The computation flow is to generate single photons and entangle them. PsiQuantum uses dual-rail entangling/encoding for photons. The entangled photons are the qubits and are grouped into what PsiQuantum calls resource states, a group of qubits if you will. Fusion measurements (more below) act as gates. Shadbolt says the operations can be mapped to a standard gate set to achieve universal, error-corrected quantum computing.
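For readers unfamiliar with dual-rail encoding, the following minimal numerical sketch (illustrative Python, not PsiQuantum code) shows the idea: a logical qubit is one photon shared between two waveguide modes, and a beam splitter rotates the state within that single-photon subspace. The beam-splitter phase convention here is an assumption chosen for illustration.

    import numpy as np

    # Dual-rail encoding of a photonic qubit:
    # |0>_L = photon in waveguide mode a  -> basis vector [1, 0]
    # |1>_L = photon in waveguide mode b  -> basis vector [0, 1]
    ket0 = np.array([1.0 + 0j, 0.0])

    def beam_splitter(theta):
        """Action of a beam splitter restricted to the one-photon subspace
        of two modes (one common phase convention among several)."""
        return np.array([[np.cos(theta), 1j * np.sin(theta)],
                         [1j * np.sin(theta), np.cos(theta)]])

    # A 50:50 beam splitter (theta = pi/4) puts the photon into an equal
    # superposition of the two rails.
    state = beam_splitter(np.pi / 4) @ ket0
    print(np.abs(state) ** 2)   # ~[0.5, 0.5]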

On-chip components carry out the process. It all sounds quite exotic, in part because it differs from more widely used matter-based qubit technologies. The figure below, taken from a PsiQuantum paper, "Fusion-based quantum computation," issued about a year ago, roughly describes the process.

Digging into the details is best done by reading the papers, and the company has archived videos exploring its approach on its website. The video below is a good, brief summation by Mercedes Gimeno-Segovia, vice president of quantum architecture at PsiQuantum.

Shadbolt also briefly described fusion-based quantum computation (FBQC).

"Once you've got single photons, you need to build what we refer to as seed states. Those are pretty small entangled states and can be constructed again using linear optics. So, you take some single photons and send them into an interferometer and, together with single-photon detection, you can probabilistically generate small entangled states. You can then multiplex those again, and basically the task is to get as fast as possible to a large enough, complex enough, appropriately structured resource state which is ready to then be acted upon by a fusion network. That's it. You want to kill the photon as fast as possible. You don't want photons living for a long time if you can avoid it. That's pretty much it," said Shadbolt.
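Multiplexing probabilistic sources is a standard linear-optics idea, and a toy calculation shows why it is central here: if one heralded attempt succeeds with probability p, switching among N parallel attempts raises the overall success probability to 1 - (1 - p)^N. The numbers below are purely illustrative, not PsiQuantum figures.

    # Toy model of source multiplexing: each heralded attempt succeeds with
    # probability p; a switch routes any one success onward.
    def multiplexed_success(p: float, n: int) -> float:
        """Probability that at least one of n independent attempts succeeds."""
        return 1.0 - (1.0 - p) ** n

    for n in (1, 4, 16, 64):
        print(n, round(multiplexed_success(0.1, n), 3))
    # 1 0.1, 4 0.344, 16 0.815, 64 0.999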

"The fusion operators are the smallest, simplest piece of the machine. The multiplexed single-photon sources are the biggest, most expensive piece. Everything in the middle is kind of the secret sauce of our architecture; some of that we've put out in that paper and you can see kind of how that works," he said. (At the risk of overkill, another brief description of the system from PsiQuantum is presented at the end of the article.)

One important FBQC advantage, says PsiQuantum, is that the shallow depth of optical circuits makes error correction easier. The small entangled states fueling the computation are referred to as resource states. Importantly, their size is independent of the code distance used or the computation being performed. This allows them to be generated by a constant number of operations. Since the resource states are measured immediately after they are created, the total depth of operations is also constant. As a result, errors in the resource states are bounded, which is important for fault tolerance.
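A toy error model shows why bounded depth matters: if each operation a photon participates in fails with probability p, the accumulated error after a given depth is 1 - (1 - p)^depth. With constant depth that error is bounded regardless of code distance, whereas it grows if a qubit must survive a number of operations that scales with the distance. The values below are illustrative assumptions, not figures from PsiQuantum's papers.

    # Toy error model: a photon subjected to `depth` noisy operations,
    # each failing with probability p, survives with probability (1-p)^depth.
    def accumulated_error(p: float, depth: int) -> float:
        return 1.0 - (1.0 - p) ** depth

    p = 0.001
    # FBQC-style: constant depth per resource state, independent of code distance d.
    print("constant depth:", accumulated_error(p, 10))
    # Strawman circuit-style comparison where a qubit's lifetime grows with d.
    for d in (10, 20, 40):
        print("d =", d, accumulated_error(p, 10 * d))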

Some of the differences between PsiQuantum's FBQC design and the more familiar MBQC (measurement-based quantum computing) paradigm are shown below.

Another advantage is the operating environment.

"Nothing about photons themselves requires cryogenic operation. You can do very high-fidelity manipulation and generation of qubits at room temperature, and in fact, you can even detect single photons at room temperature just fine. The efficiency of room-temperature single-photon detectors is not good enough for fault tolerance. These room-temperature detectors are based on pretty complex semiconductor devices, avalanche photodiodes, and there's no physical reason why you couldn't push those to the necessary efficiency, but it looks really difficult [and] people have been trying for a very long time," said Shadbolt.

"We use a superconducting single-photon detector, which can achieve the necessary efficiencies without a ton of development. It's worth noting those detectors run in the ballpark of 4 Kelvin, so liquid-helium temperature, which is still very cold, but it's nowhere near as cold as the milli-Kelvin temperatures required for superconducting qubits or some of the competing technologies," said Shadbolt.

This has important implications for control-circuit placement, as well as for the reduced power needed to maintain the 4 Kelvin environment.

There's a lot to absorb here, and it's best done directly from the papers. PsiQuantum, like many other quantum start-ups, was founded by researchers who were already digging into the quantum computing space, and they've shown that PsiQuantum's FBQC flavor of linear optical quantum computing will work. While at Bristol, Shadbolt was involved in the first demonstration of running a Variational Quantum Eigensolver (VQE) on a photonic chip.

The biggest challenges for PsiQuantum, he suggests, are developing manufacturing techniques and system architecture around well-known optical technology. The company argues having a Tier-1 fab partner such as GlobalFoundries is decisive.

"You can go into infinite detail on the architecture and how all the bits and pieces go together. But the point of optical quantum computing is that the network of components is pretty complicated (all sorts of modules and structures and multiplexing strategies, and resource state generation schemes and interferometers, and so on), but they're all just made out of beam splitters, and switches, and single-photon sources and detectors. It's kind of like in a conventional CPU: you can go in with a microscope and examine the structure of the cache and the ALU and whatever, but underneath it's all just transistors. It's the same kind of story here. The limiting factor in our development is the semiconductor process enablement. The thesis has always been that if you tried to build a quantum computer anywhere other than a high-volume semiconductor manufacturing line, your quantum computer isn't going to work," he said.

"Any quantum computer needs millions of qubits. Millions of qubits don't fit on a single chip. So you're talking about heaps of chips, probably billions of components realistically, and they all need to work, and they all need to work better than the state of the art. That brings us to the progress, which is, again, rearranging those various components into ever more efficient and complex networks, in pretty close analogy with CPU architecture. It's a very key part of our IP, but it's not rate limiting and it's not terribly expensive to change the network of components on the chip once we've got the manufacturing process. We're continuously moving the needle on that architecture development, and we've improved these architectures in terms of their tolerance to loss by more than 150x, [actually] well beyond that. We've reduced the size of the machine, purely through architectural improvements, by many, many orders of magnitude.

"The big, expensive, slow pieces of the development are in being able to build high-quality components at GlobalFoundries in New York. What we've already done there is to put single-photon sources and superconducting nanowire single-photon detectors into that manufacturing process engine. We can build wafers, 300-millimeter wafers, with tens of thousands of components on the wafer, including a full silicon photonics PDK (process design kit), and also a very high-performing single-photon detector. That's real progress that brings us closer to being able to build a quantum computer, because that lets us build millions to billions of components."

Shadbolt says real systems will quickly follow development of the manufacturing process. PsiQuantum, like everyone in the quantum computing community, is collaborating closely with potential users. Roughly a week ago, it issued a joint paper with Mercedes-Benz discussing quantum computer simulation of Li-ion chemistry. If the PsiQuantum-GlobalFoundries process is ready around 2025, can a million-qubit system (100 logical qubits) be far behind?
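Taking only the figures quoted in this article (a million physical qubits corresponding to roughly 100 logical qubits), the implied error-correction overhead is easy to work out. The sketch below is a back-of-envelope ratio, not a PsiQuantum specification.

    # Ratio implied by the article's own numbers: ~1,000,000 physical qubits
    # supporting roughly 100 error-corrected logical qubits.
    physical_qubits = 1_000_000
    logical_qubits = 100
    print(physical_qubits // logical_qubits)  # ~10,000 physical qubits per logical qubit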

Shadbolt would only say that things will happen quickly once the process has been fully developed. He noted there are three ways to make money with a quantum computer: sell machines, sell time, and sell solutions that come from that machine. "I think we're exploring all of the above," he said.

"Our customers, a growing list at this point (pharmaceutical companies, car companies, materials companies, big banks), are coming to us to understand what a quantum computer can do for them. To understand that, what we are doing, principally, is fault-tolerant resource counting," said Shadbolt. "So that means we're taking the algorithm, or taking the problem the customer has, working with their technical teams to look under the hood, and understand the technical requirements of solving that problem. We are turning that into the quantum algorithms and subroutines that are appropriate. We're compiling that for the fault-tolerant gate set that will run on top of that fusion network, which, by the way, is a completely vanilla, textbook fault-tolerant gate set."
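To make "fault-tolerant resource counting" concrete in a generic sense, here is a heavily simplified sketch of the kind of estimate involved: given an algorithm's logical-qubit count and T-gate count, project a physical-qubit footprint and a runtime. Every formula and parameter value below (the surface-code-style overhead, code distance, cycle time) is an illustrative assumption, not PsiQuantum's methodology.

    # Generic fault-tolerant resource-counting sketch (illustrative only).
    def resource_estimate(logical_qubits: int,
                          t_count: int,
                          code_distance: int = 25,
                          cycle_time_s: float = 1e-6) -> dict:
        # Rough surface-code-style footprint: ~2*d^2 physical qubits per logical qubit.
        physical_per_logical = 2 * code_distance ** 2
        # Assume sequential T gates dominate, each taking on the order of
        # d code cycles (a common rule of thumb, not a PsiQuantum figure).
        runtime_s = t_count * code_distance * cycle_time_s
        return {
            "physical_qubits": logical_qubits * physical_per_logical,
            "runtime_seconds": runtime_s,
        }

    print(resource_estimate(logical_qubits=100, t_count=10**9))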

Stay tuned.

PsiQuantum Papers

Fusion-based quantum computation, https://arxiv.org/abs/2101.09310

Creation of Entangled Photonic States Using Linear Optics, https://arxiv.org/abs/2106.13825

Interleaving: Modular architectures for fault-tolerant photonic quantum computing, https://arxiv.org/abs/2103.08612

Description of PsiQuantum's Fusion-Based System from the Interleaving Paper

Useful fault-tolerant quantum computers require very large numbers of physical qubits. Quantum computers are often designed as arrays of static qubits executing gates and measurements. Photonic qubits require a different approach. In photonic fusion-based quantum computing (FBQC), the main hardware components are resource-state generators (RSGs) and fusion devices connected via waveguides and switches. RSGs produce small entangled states of a few photonic qubits, whereas fusion devices perform entangling measurements between different resource states, thereby executing computations. In addition, low-loss photonic delays such as optical fiber can be used as fixed-time quantum memories simultaneously storing thousands of photonic qubits.

Here, we present a modular architecture for FBQC in which these components are combined to form interleaving modules consisting of one RSG with its associated fusion devices and a few fiber delays. Exploiting the multiplicative power of delays, each module can add thousands of physical qubits to the computational Hilbert space. Networks of modules are universal fault-tolerant quantum computers, which we demonstrate using surface codes and lattice surgery as a guiding example. Our numerical analysis shows that in a network of modules containing 1-km-long fiber delays, each RSG can generate four logical distance-35 surface-code qubits while tolerating photon loss rates above 2% in addition to the fiber-delay loss. We illustrate how the combination of interleaving with further uses of non-local fiber connections can reduce the cost of logical operations and facilitate the implementation of unconventional geometries such as periodic boundaries or stellated surface codes. Interleaving applies beyond purely optical architectures, and can also turn many small disconnected matter-qubit devices with transduction to photons into a large-scale quantum computer.
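To see where the "thousands of photonic qubits" stored in a fiber delay come from, a rough calculation helps: the number of resource states in flight equals the fiber's propagation delay multiplied by the rate at which the resource-state generator emits them. The 1 GHz emission rate below is an assumed, illustrative value, not a confirmed PsiQuantum specification.

    # Fiber delay as a fixed-time quantum memory (illustrative numbers).
    SPEED_OF_LIGHT = 2.998e8   # m/s in vacuum
    FIBER_INDEX = 1.47         # typical refractive index of silica fiber

    def qubits_in_flight(fiber_length_m: float, emission_rate_hz: float) -> float:
        delay_s = fiber_length_m / (SPEED_OF_LIGHT / FIBER_INDEX)
        return delay_s * emission_rate_hz

    print(qubits_in_flight(1_000, 1e9))  # ~4,900 qubits stored in 1 km of fiber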

Slides/Figures from various PsiQuantum papers and public presentations
