> IBM anticipates that the first cases of verified quantum advantage will be confirmed by the wider community by the end of 2026.
In 2019, Google claimed quantum supremacy [1]. I'm truly confused about what quantum computing can do today, or what it's likely to be able to do in the next decade.
[1] https://www.nasa.gov/technology/computing/google-and-nasa-ac...
There's legitimately interesting research into using it to accelerate certain calculations. For example, you usually see a few talks at chemistry conferences on how it's gotten marginally faster at (very basic) electronic structure calculations. There's also some neat stuff in the optimization space. The kind of thing you keep an eye on hoping it's useful in 10 years.
The closest comparison is AI, except even AI has found some practical applications. There isn't really much practical use for quantum computers right now beyond bumping up your h-index.
Well, maybe there is one. As a joke with some friends after a particularly bad string of natural 1s in D&D, I used IBM's free tier (IIRC it's 10 minutes per month) and wrote a dice roller to achieve maximum randomness.
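For anyone curious, the core of it is tiny. Here's a minimal sketch of a quantum d20 along those lines, using Qiskit's local AerSimulator rather than IBM's cloud hardware (so no free-tier minutes involved); the rejection-sampling step just keeps the roll uniform:

```python
# Minimal quantum d20 sketch: five qubits in uniform superposition
# give a value in 0..31; results of 20..31 are rejected so the
# remaining outcomes map uniformly onto 1..20.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

def quantum_d20() -> int:
    qc = QuantumCircuit(5)
    qc.h(range(5))       # Hadamard on each qubit: uniform over 0..31
    qc.measure_all()
    sim = AerSimulator()
    while True:
        counts = sim.run(qc, shots=1).result().get_counts()
        value = int(next(iter(counts)), 2)   # the single measured bitstring
        if value < 20:                       # rejection sampling
            return value + 1

print(quantum_d20())
```

On real hardware you'd submit the same circuit through IBM's runtime service instead of the local simulator.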
That was my understanding too: in chemistry, materials science, pharmaceutical development, etc., quantum tech is somewhat promising and might be pretty viable in those specific niche fields within the decade.
IBM challenged that claim, arguing the 2019 task could be handled by a classical supercomputer [1].
The main issue is that the problems where today's early quantum computers have an advantage were specifically designed as demonstrations. All of the tasks that people actually wanted a quantum computer for are still impractical on today's hardware.
[1] https://www.quantamagazine.org/google-and-ibm-clash-over-qua...
The trouble with quantum supremacy results is they disappear as soon as you observe them (carefully).
Sorry for that, but seriously, I'd treat this kind of claim like any other putative breakthrough (room-temperature superconductors spring to mind): until it's independently verified, it's worthless. The punishment for crying wolf is minimal, and by the time you're shown to be bullshitting, the headlines have moved on.
The other method, of course, is to just obsessively check Scott Aaronson's blog.
I happen to know IBM made some great hires -- one of my classmates, who was excellent in the field and had impressive quantum computing publications in Nature before graduating, worked at IBM for the past several years.
Though it looks like he recently switched to working at Google AI...
https://scholar.google.com/citations?user=NaxMJzQAAAAJ&hl=en
I've been bitten by the mass-marketing nonsense of "Watson", but IBM Research does some pretty good work, and their progress on quantum computing seems to be "real" -- and certainly more reliable than Microsoft's (shocked!).
Sooo... are we factoring 21 without shortcuts yet?
Related Qiskit Tutorial Video[0] "This tutorial covers advanced techniques for implementing the Quantum Approximate Optimization Algorithm (QAOA) at the utility scale using Qiskit. In this video, we walk through how to build, optimize, and run QAOA for real world optimization problems on real IBM Quantum hardware. This series is designed for quantum computing practitioners who are ready to move beyond basic examples and start running large scale, hardware aware algorithms. We explore how to transition from theory to practical execution, covering algorithm development, circuit optimization, hybrid workflows, and best practices for hardware performance. Whether you are expanding your QAOA skills or preparing to run your own research experiments, this tutorial will help you strengthen your understanding of utility scale quantum computing with Qiskit."
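For anyone who wants a feel for the structure before diving into the utility-scale material: below is a toy, hand-rolled single-layer QAOA circuit for MaxCut on a triangle graph, run on Qiskit's local AerSimulator. It's a sketch, not the tutorial's hardware-aware workflow, and the angles (gamma, beta) are fixed for illustration instead of being classically optimized.

```python
# Toy single-layer QAOA for MaxCut on a triangle graph.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

edges = [(0, 1), (1, 2), (0, 2)]   # triangle: the best cuts cross 2 edges
gamma, beta = 0.8, 0.4             # illustrative fixed angles, not optimized

qc = QuantumCircuit(3)
qc.h(range(3))                     # uniform superposition over all cuts
for i, j in edges:                 # cost layer: exp(-i*gamma*Z_i*Z_j) per edge
    qc.cx(i, j)
    qc.rz(2 * gamma, j)
    qc.cx(i, j)
qc.rx(2 * beta, range(3))          # mixer layer: RX(2*beta) on every qubit
qc.measure_all()

counts = AerSimulator().run(qc, shots=2048).result().get_counts()
print(max(counts, key=counts.get)) # most sampled bitstring: candidate cut
```

In a real run you'd wrap this in a classical optimizer that tunes gamma and beta against the measured cost, which is where most of the hybrid-workflow machinery in the tutorial comes in.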
"Qiskit capabilities show 24 percent increase in accuracy" what was it before? What good is a computer that is not 100% accurate? Do I have to run a function 1000x to get some average 99% chance the output is correct?
Many classical information processing devices are less than 100% reliable. Wifi (or old-school dialup) will drop a non-trivial number of packets. RAM chips have some non-zero error rate, but in most cases we don't notice [1]. Computer processors in space will similarly fail due to cosmic ray bombardment. In all cases, you mitigate such problems by adding redundancy or error correction.
Quantum computer hardware is similarly very error-prone, and it is unlikely that we will ever build quantum hardware with ignorable levels of error. However, people have developed many techniques, often much more sophisticated than in the classical domain, for handling the fragility of quantum hardware. I am not familiar with the details of recent improvements in Qiskit, but they are referring to improvements in specific "error mitigation" techniques implemented within Qiskit. These techniques will be used in tandem with other methods like error correction to create quantum computers that give you answers with a close-to, but less than, 100% chance of success.
As you say, in these cases, you will repeat your simulation a few times and take a majority vote.
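The arithmetic behind that is just a binomial tail. A quick plain-Python check (assuming independent errors per run):

```python
# If one run is correct with probability p, the chance that a majority
# of n independent runs is correct is the upper tail of Binomial(n, p).
from math import comb

def majority_correct(p: float, n: int) -> float:
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_correct(0.6, 101))   # ~0.98 from a 60%-accurate device
```

So as long as a single run is right more often than not, repetition gets you arbitrarily close to certainty.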
(Right now "computers that aren't 100% accurate" are all the rage, even without quantum computing. Though a lot of people are wondering if that's any good, too.)
They're especially good for oracle-type problems, where you can verify an answer much faster than you can find one. NP problems are an especially prominent example. If the answer is wrong, you try again.
In theory it might take a very long time to find the answer. But even if you've only got 25% accuracy, the odds of being wrong 10 times in a row are only about 6%. The odds of being wrong 100 times in a row are so small they require scientific notation (about 3×10^-13). That's worth it to be able to solve an otherwise exponential problem.
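The retry pattern itself is trivial to write down. Here's a sketch where `noisy_solver` and `verify` are hypothetical placeholders, not any real API -- verification being the cheap classical step:

```python
# Keep sampling candidate answers from an unreliable solver until a
# cheap verifier accepts one. For an oracle-type problem, verify() is
# fast even when finding the answer is hard.
import random

def solve_with_retries(noisy_solver, verify, max_tries=100):
    for attempt in range(1, max_tries + 1):
        candidate = noisy_solver()
        if verify(candidate):
            return candidate, attempt
    return None, max_tries

# Toy stand-in for a 25%-accurate solver; expected ~4 tries.
ANSWER = 42
result, tries = solve_with_retries(
    lambda: ANSWER if random.random() < 0.25 else random.randrange(100),
    lambda x: x == ANSWER)
print(result, tries)
```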
Quantum computers have error bounds, and you can use that to tune your error rate to being-hit-by-a-cosmic-ray level of acceptability.
It's still far from clear that they can build general-purpose quantum computers big enough to do anything useful. But the built-in error factors are not, in themselves, a bar.
One of my colleagues read a paper about quantum computing techniques for solving complex optimization problems (the domain of mixed-integer solvers) and tried it out on a financial portfolio optimization, replicating the examples provided by one of the quantum computing companies during a trial period.
The computer *did not* produce the same results each time, and the results were often wrong. The service provider's support staff didn't help -- their response was effectively "oh shucks."
We stopped considering quantum computing after that. Not suitable for our use case.
Maybe quantum computing would be applicable if you were trying to crack encryption, wherein getting the right result once is helpful regardless of how many wrong answers you get in the process.
Essentially correct. With a quantum computer you do multiple runs and average the result.
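Concretely, "average the result" often means estimating an expectation value from repeated shots. A toy sketch on the local simulator (AerSimulator is my assumption here; any backend works the same way):

```python
# Estimate <Z> on a single qubit by averaging measurement outcomes.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(1)
qc.ry(1.0, 0)        # prepare a state whose exact <Z> is cos(1.0) ~ 0.540
qc.measure_all()

counts = AerSimulator().run(qc, shots=4096).result().get_counts()
shots = sum(counts.values())
expval = (counts.get('0', 0) - counts.get('1', 0)) / shots
print(expval)        # approaches cos(1.0); shot noise shrinks with more shots
```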