“I think today the message needs to be: yes, unequivocally, worry about this now. Have a plan.”

A recent Scott Aaronson blog sums up why major businesses and governments are updating cryptography ASAP.

It also addresses the variety of outdated arguments pessimists are stuck on.

(You can google scottaaronson blog 8329, since I can’t seem to share the link here; I’ll paste the post below as well.)

His quote:

“I’m so used to delivering a message of, maybe, eventually, someone will need to start thinking about migrating from RSA and Diffie-Hellman and elliptic curve crypto to lattice-based crypto, or other systems that could plausibly withstand quantum attack. I think today that message needs to change. I think today the message needs to be: yes, unequivocally, worry about this now. Have a plan.”

I’ve also thought it is time for existing coins to start designing and planning, at a minimum. It’s plainly obvious that major companies like Apple and Google are already implementing changes, so crypto should be at the forefront of this new era.

This will require major work, and there will be impacts and tradeoffs that require consensus.

That is probably why there is such resistance to acknowledging the problem. I fear we will be left with rushed solutions that create unnecessary disruption, or worse.

Vitalik has at least considered an emergency plan, but we should be demanding the building and testing of viable solutions ahead of an emergency.

It will be a full rewrite of how we digitally sign our transactions, not backward compatible, and it will likely require users to take action, such as transferring funds to wallets that sign with the new cryptography.
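
To make the scale of that rewrite a little more concrete, here is a minimal, purely illustrative Python sketch of what “signing with the new cryptography” could look like. It uses a hash-based Lamport one-time signature as a stand-in for whatever post-quantum scheme a chain actually adopts; the migration message and all names here are hypothetical, not any project’s real design.

import os, hashlib

def _h(data):
    # SHA-256 helper
    return hashlib.sha256(data).digest()

def lamport_keygen():
    # 256 pairs of random 32-byte secrets; the public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(_h(a), _h(b)) for a, b in sk]
    return sk, pk

def lamport_sign(message, sk):
    digest = _h(message)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    # Reveal one secret per message bit; a Lamport key must never be reused.
    return [sk[i][bit] for i, bit in enumerate(bits)]

def lamport_verify(message, signature, pk):
    digest = _h(message)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return all(_h(sig) == pk[i][bit] for i, (sig, bit) in enumerate(zip(signature, bits)))

# Hypothetical migration flow: generate a quantum-resistant wallet key, then sign
# and verify a transfer that moves funds away from a legacy ECDSA address.
sk, pk = lamport_keygen()
tx = b"move all funds from old ECDSA address to new post-quantum address"
sig = lamport_sign(tx, sk)
print("migration transaction signature valid:", lamport_verify(tx, sig, pk))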

If you are looking beyond the next 6 months for your portfolio, you’ll need to start considering the impacts and which projects are genuinely prepared, rather than relying on claims that it will be easy to add in later.

Here is the pasted text of the blog:

Quantum Computing: Between Hope and Hype

So, back in June the White House announced that UCLA would host a binational US/India workshop, for national security officials from both countries to learn about the current status of quantum computing and post-quantum cryptography. It fell to my friend and colleague Rafail Ostrovsky to organize the workshop, which ended up being held last week. When Rafi invited me to give the opening talk, I knew he’d keep emailing until I said yes. So, on the 3-hour flight to LAX, I wrote the following talk in a spiral notebook, which I then delivered the next morning with no slides. I called it “Quantum Computing: Between Hope and Hype.” I thought Shtetl-Optimized readers might be interested too, since it contains my reflections on a quarter-century in quantum computing, and prognostications on what I expect soon. Enjoy, and let me know what you think!

Quantum Computing: Between Hope and Hype

by Scott Aaronson

September 16, 2024

When Rafi invited me to open this event, it sounded like he wanted big-picture pontification more than technical results, which is just as well, since I’m getting old for the latter. Also, I’m just now getting back into quantum computing after a two-year leave at OpenAI to think about the theoretical foundations of AI safety. Luckily for me, that was a relaxing experience, since not much happened in AI these past two years. [Pause for laughs] So then, did anything happen in quantum computing while I was away?

This, of course, has been an extraordinary time for both quantum computing and AI, and not only because the two fields were mentioned for the first time in an American presidential debate (along with, I think, the problem of immigrants eating pets). But it’s extraordinary for quantum computing and for AI in very different ways. In AI, practice is wildly ahead of theory, and there’s a race for scientific understanding to catch up to where we’ve gotten via the pure scaling of neural nets and the compute and data used to train them. In quantum computing, it’s just the opposite: there’s right now a race for practice to catch up to where theory has been since the mid-1990s.

I started in quantum computing around 1998, which is not quite as long as some people here, but which does cover most of the time since Shor’s algorithm and the rest were discovered. So I can say: this past year or two is the first time I’ve felt like the race to build a scalable fault-tolerant quantum computer is actually underway. Like people are no longer merely giving talks about the race or warming up for the race, but running the race.

Within just the last few weeks, we saw the group at Google announce that they’d used the Kitaev surface code, with distance 7, to encode one logical qubit using 100 or so physical qubits, in superconducting architecture. They got a net gain: their logical qubit stays alive for maybe twice as long as the underlying physical qubits do. And crucially, they find that their logical coherence time increases as they pass to larger codes, with higher distance, on more physical qubits. With superconducting, there are still limits to how many physical qubits you can stuff onto a chip, and eventually you’ll need communication of qubits between chips, which has yet to be demonstrated. But if you could scale Google’s current experiment even to 1500 physical qubits, you’d probably be below the threshold where you could use that as a building block for a future scalable fault-tolerant device.
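
As a rough illustration of why higher distance helps (this is not Google’s data), the textbook surface-code scaling suppresses the logical error rate roughly as (p/p_th)^((d+1)/2) once the physical error rate p is below threshold. The numbers in this sketch are assumptions chosen only to show the shape of the curve.

# Rough surface-code scaling sketch; all numbers are illustrative assumptions.
p_physical = 3e-3    # assumed physical error rate per cycle
p_threshold = 1e-2   # assumed surface-code threshold

def logical_error_rate(distance, p=p_physical, p_th=p_threshold, prefactor=0.1):
    # Logical errors are suppressed exponentially in the code distance below threshold.
    return prefactor * (p / p_th) ** ((distance + 1) / 2)

for d in (3, 5, 7, 9, 11):
    qubits = 2 * d * d - 1   # data plus measurement qubits in a distance-d surface code
    print(f"distance {d:2d}: ~{qubits:4d} physical qubits, "
          f"logical error rate ~ {logical_error_rate(d):.2e}")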

Then, just last week, a collaboration between Microsoft and Quantinuum announced that, in the trapped-ion architecture, they applied pretty substantial circuits to logically-encoded qubits—again in a way that gets a net gain in fidelity over not doing error-correction, modulo a debate about whether they’re relying too much on postselection. So, they made a GHZ state, which is basically like a Schrödinger cat, out of 12 logically encoded qubits. They also did a “quantum chemistry simulation,” which had only two logical qubits, but which required three logical non-Clifford gates—which is the hard kind of gate when you’re doing error-correction.
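
For readers who want to see what a GHZ (cat) state looks like as a circuit, here is a small Qiskit sketch: a Hadamard followed by a chain of CNOTs. These are ordinary circuit qubits; in the experiment described above, each would itself be a logically encoded qubit.

from qiskit import QuantumCircuit

# Build a 12-qubit GHZ state (|000...0> + |111...1>)/sqrt(2):
# Hadamard on qubit 0, then CNOTs spread the superposition down the line.
n = 12
qc = QuantumCircuit(n)
qc.h(0)
for i in range(n - 1):
    qc.cx(i, i + 1)
print(qc.draw())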

Because of these advances, as well as others—what QuEra is doing with neutral atoms, what PsiQuantum and Xanadu are doing with photonics, etc.—I’m now more optimistic than I’ve ever been that, if things continue at the current rate, either there are useful fault-tolerant QCs in the next decade, or else something surprising happens to stop that. Plausibly we’ll get there not just with one hardware architecture, but with multiple ones, much like the Manhattan Project got a uranium bomb and a plutonium bomb around the same time, so the question will become which one is most economic.

If someone asks me why I’m now so optimistic, the core of the argument is 2-qubit gate fidelities. We’ve known for years that, at least on paper, quantum fault-tolerance becomes a net win (that is, you sustainably correct errors faster than you introduce new ones) once you have physical 2-qubit gates that are ~99.99% reliable. The problem has “merely” been how far we were from that. When I entered the field, in the late 1990s, it would’ve been like a Science or Nature paper to do a 2-qubit gate with 50% fidelity. But then at some point the 50% became 90%, became 95%, became 99%, and within the past year, multiple groups have reported 99.9%. So, if you just plot the log of the infidelity as a function of year and stare at it—yeah, you’d feel pretty optimistic about the next decade too!
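
If you want to literally make that plot, here is a minimal sketch. The fidelity milestones are the approximate ones just mentioned; the years attached to the intermediate points are assumptions added only to draw a trend line, and the fit simply extrapolates the log-infidelity trend toward the ~99.99% ballpark.

import numpy as np

# Approximate 2-qubit gate fidelity milestones; intermediate years are assumed.
years = np.array([1998, 2008, 2014, 2019, 2024])
fidelities = np.array([0.50, 0.90, 0.95, 0.99, 0.999])
log_infidelity = np.log10(1.0 - fidelities)

# Straight-line fit to log10(infidelity) versus year.
slope, intercept = np.polyfit(years, log_infidelity, 1)

target = np.log10(1.0 - 0.9999)   # the ~99.99% fault-tolerance ballpark
year_at_target = (target - intercept) / slope

print(f"fitted slope: {slope:.3f} orders of magnitude of infidelity per year")
print(f"trend line reaches 99.99% fidelity around the year {year_at_target:.0f}")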

Or pessimistic, as the case may be! To any of you who are worried about post-quantum cryptography—by now I’m so used to delivering a message of, maybe, eventually, someone will need to start thinking about migrating from RSA and Diffie-Hellman and elliptic curve crypto to lattice-based crypto, or other systems that could plausibly withstand quantum attack. I think today that message needs to change. I think today the message needs to be: yes, unequivocally, worry about this now. Have a plan.

So, I think this moment is a good one for reflection. We’re used to quantum computing having this air of unreality about it. Like sure, we go to conferences, we prove theorems about these complexity classes like BQP and QMA, the experimenters do little toy demos that don’t scale. But if this will ever be practical at all, then for all we know, not for another 200 years. It feels really different to think of this as something plausibly imminent. So what I want to do for the rest of this talk is to step back and ask, what are the main reasons why people regarded this as not entirely real? And what can we say about those reasons in light of where we are today?

Reason #1

For the general public, maybe the overriding reason not to take QC seriously has just been that it sounded too good to be true. Like, great, you’ll have this magic machine that’s gonna exponentially speed up every problem in optimization and machine learning and finance by trying out every possible solution simultaneously, in different parallel universes. Does it also dice peppers?

For this objection, I’d say that our response hasn’t changed at all in 30 years, and it’s simply, “No, that’s not what it will do and not how it will work.” We should acknowledge that laypeople and journalists and unfortunately even some investors and government officials have been misled by the people whose job it was to explain this stuff to them.

I think it’s important to tell people that the only hope of getting a speedup from a QC is to exploit the way that QM works differently from classical probability theory — in particular, that it involves these numbers called amplitudes, which can be positive, negative, or even complex. With every quantum algorithm, what you’re trying to do is choreograph a pattern of interference where for each wrong answer, the contributions to its amplitude cancel each other out, whereas the contributions to the amplitude of the right answer reinforce each other. The trouble is, it’s only for a few practical problems that we know how to do that in a way that vastly outperforms the best known classical algorithms.
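
A minimal numpy example of that cancellation: apply a Hadamard gate twice to |0>. The two computational paths leading to |1> carry amplitudes +1/2 and -1/2 and cancel, while the paths to |0> reinforce.

import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate

ket0 = np.array([1.0, 0.0])            # start in |0>

after_one = H @ ket0                   # equal amplitudes on |0> and |1>
after_two = H @ after_one              # the two paths to |1> interfere destructively

print("after one H:", after_one)              # [0.707, 0.707]
print("after two H:", after_two)              # [1.0, 0.0]
print("probabilities:", np.abs(after_two)**2) # all weight back on |0>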

What are those problems? Here, for all the theoretical progress that’s been made in these past decades, I’m going to give the same answer in 2024 that I would’ve given in 1998. Namely, there’s the simulation of chemistry, materials, nuclear physics, or anything else where many-body quantum effects matter. This was Feynman’s original application from 1981, but probably still the most important one commercially. It could plausibly help with batteries, drugs, solar cells, high-temperature superconductors, all kinds of other things, maybe even in the next few years.

And then there’s breaking public-key cryptography, which is not commercially important, but is important for other reasons well-known to everyone here.

And then there’s everything else. For problems in optimization, machine learning, finance, and so on, there’s typically a Grover’s speedup, but that of course is “only” a square root and not an exponential, which means that it will take much longer before it’s relevant in practice. And one of the earliest things we learned in quantum computing theory is that there’s no “black-box” way to beat the Grover speedup. By the way, that’s also relevant to breaking cryptography — other than the subset of cryptography that’s based on abelian groups and can be broken by Shor’s algorithm or the like. The centerpiece of my PhD thesis, twenty years ago, was the theorem that you can’t get more than a Grover-type polynomial speedup for the black-box problem of finding collisions in cryptographic hash functions.
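
To see how much “only a square root” still leaves on the table, here is a back-of-envelope sketch for a 128-bit key; the iteration rate is an assumed, optimistic figure, not a hardware estimate.

import math

key_bits = 128
classical_guesses = 2 ** key_bits               # exhaustive classical search
grover_iterations = math.isqrt(2 ** key_bits)   # ~2**64: the square-root speedup

# Even at an assumed billion error-corrected Grover iterations per second,
# the square-root attack still takes centuries.
iterations_per_second = 1e9
years = grover_iterations / iterations_per_second / (3600 * 24 * 365)

print(f"classical guesses : 2^{key_bits} ≈ {classical_guesses:.2e}")
print(f"Grover iterations : ≈ {grover_iterations:.2e}")
print(f"time at 1e9 it/s  : ≈ {years:.0f} years")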

So then what remains? Well, there are all sorts of heuristic quantum algorithms for classical optimization and machine learning problems — QAOA (Quantum Approximate Optimization Algorithm), quantum annealing, and so on — and we can hope that sometimes they’ll beat the best classical heuristics for the same problems, but it will be trench warfare, not just magically speeding up everything. There are lots of quantum algorithms somehow inspired by the HHL (Harrow-Hassidim-Lloyd) algorithm for solving linear systems, and we can hope that some of those algorithms will get exponential speedups for end-to-end problems that matter, as opposed to problems of transforming one quantum state to another quantum state. We can of course hope that new quantum algorithms will be discovered. And most of all, we can look for entirely new problem domains, where people hadn’t even considered using quantum computers before—new orchards in which to pick low-hanging fruit. Recently, Shih-Han Hung and I, along with others, have proposed using current QCs to generate cryptographically certified random numbers, which could be used in proof-of-stake cryptocurrencies like Ethereum. I’m hopeful that people will find other protocol applications of QC like that one — “proof of quantum work.” [Another major potential protocol application, which Dan Boneh brought up after my talk, is quantum one-shot signatures.]

Anyway, taken together, I don’t think any of this is too good to be true. I think it’s genuinely good and probably true!

Reason #2

A second reason people didn’t take seriously that QC was actually going to happen was the general thesis of technological stagnation, at least in the physical world. You know, maybe in the 40s and 50s, humans built entirely new types of machines, but nowadays what do we do? We issue press releases. We make promises. We argue on social media.

Nowadays, of course, pessimism about technological progress seems hard to square with the revolution that’s happening in AI, another field that spent decades being ridiculed for unfulfilled promises and that’s now fulfilling the promises. I’d also speculate that, to the extent there is technological stagnation, most of it is simply that it’s become really hard to build new infrastructure—high-speed rail, nuclear power plants, futuristic cities—for legal reasons and NIMBY reasons and environmental review reasons and Baumol’s cost disease reasons. But none of that really applies to QC, just like it hasn’t applied so far to AI.

Reason #3

A third reason people didn’t take this seriously was the sense of “It’s been 20 years already, where’s my quantum computer?” QC is often compared to fusion power, another technology that’s “eternally just over the horizon.” (Except, I’m no expert, but there seems to be dramatic progress these days in fusion power too!)

My response to the people who make that complaint was always, like, how much do you know about the history of technology? It took more than a century for heavier-than-air flight to go from correct statements of the basic principle to reality. Universal programmable classical computers surely seemed more fantastical from the standpoint of 1920 than quantum computers seem today, but then a few decades later they were built. Today, AI provides a particularly dramatic example where ideas were proposed a long time ago—neural nets, backpropagation—those ideas were then written off as failures, but no, we now know that the ideas were perfectly sound; it just took a few decades for the scaling of hardware to catch up to the ideas. That’s why this objection never had much purchase with me, even before the dramatic advances in experimental quantum error-correction of the last year or two.

Reason #4

A fourth reason why people didn’t take QC seriously is that, a century after the discovery of QM, some people still harbor doubts about quantum mechanics itself. Either they explicitly doubt it, like Leonid Levin, Roger Penrose, or Gerard ‘t Hooft. Or they say things like, “complex Hilbert space in 2^n dimensions is a nice mathematical formalism, but mathematical formalism is not reality”—the kind of thing you say when you want to doubt, but not take full intellectual responsibility for your doubts.

I think the only thing for us to say in response, as quantum computing researchers—and the thing I consistently have said—is man, we welcome that confrontation! Let’s test quantum mechanics in this new regime. And if, instead of building a QC, we have to settle for “merely” overthrowing quantum mechanics and opening up a new era in physics—well then, I guess we’ll have to find some way to live with that.

Reason #5

My final reason why people didn’t take QC seriously is the only technical one I’ll discuss here. Namely, maybe quantum mechanics is fine but fault-tolerant quantum computing is fundamentally “screened off” or “censored” by decoherence or noise—and maybe the theory of quantum fault-tolerance, which seemed to indicate the opposite, makes unjustified assumptions. This has been the position of Gil Kalai, for example.

The challenge for that position has always been to articulate, what is true about the world instead? Can every realistic quantum system be simulated efficiently by a classical computer? If so, how? What is a model of correlated noise that kills QC without also killing scalable classical computing?—which turns out to be a hard problem.

In any case, I think this position has been dealt a severe blow by the Random Circuit Sampling quantum supremacy experiments of the past five years. Scientifically, the most important thing we’ve learned from these experiments is that the fidelity seems to decay exponentially with the number of qubits, but “only” exponentially — as it would if the errors were independent from one gate to the next, precisely as the theory of quantum fault-tolerance assumes. So for anyone who believes this objection, I’d say that the ball is now firmly in their court.
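
The “only exponentially” point can be seen with one line of arithmetic: if each gate fails independently with probability p, the circuit fidelity falls roughly as (1 - p) raised to the number of gates. The sketch below uses assumed numbers purely for illustration, not data from any experiment.

# Fidelity under independent gate errors: F ≈ (1 - p) ** num_gates.
# Exponential in circuit size, but no worse than exponential.
p_gate = 0.003   # assumed two-qubit gate error rate
depth = 20       # assumed random-circuit depth

for n_qubits in (20, 40, 53, 70):
    num_gates = n_qubits * depth // 2   # rough count of two-qubit gates
    fidelity = (1 - p_gate) ** num_gates
    print(f"{n_qubits:3d} qubits: ~{num_gates:4d} gates, expected fidelity ≈ {fidelity:.4f}")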

So, if we accept that QC is on the threshold of becoming real, what are the next steps? There are the obvious ones: push forward with building better hardware and using it to demonstrate logical qubits and fault-tolerant operations on them. Continue developing better error-correction methods. Continue looking for new quantum algorithms and new problems for those algorithms to solve.

But there’s also a less obvious decision right now. Namely, do we put everything into fault-tolerant qubits, or do we continue trying to demonstrate quantum advantage in the NISQ (pre-fault-tolerant) era? There’s a case to be made that fault-tolerance will ultimately be needed for scaling, and anything you do without fault-tolerance is some variety of non-scalable circus trick, so we might as well get over the hump now.

But I’d like to advocate putting at least some thought into how to demonstrate a quantum advantage in the near term. That could be via cryptographic protocols, like those that Kahanamoku-Meyer et al. have proposed. It could be via pseudorandom peaked quantum circuits, a recent proposal by me and Yuxuan Zhang—if we can figure out an efficient way to generate the circuits. Or we could try to demonstrate what William Kretschmer, Harry Buhrman, and I have called “quantum information supremacy,” where, instead of computational advantage, you try to do an experiment that directly shows the vastness of Hilbert space, via exponential advantages for quantum communication complexity, for example. I’m optimistic that that might be doable in the very near future, and have been working with Quantinuum to try to do it.

On the one hand, when I started in quantum computing 25 years ago, I reconciled myself to the prospect that I’m going to study what fundamental physics implies about the limits of computation, and maybe I’ll never live to see any of it experimentally tested, and that’s fine. On the other hand, once you tell me that there is a serious prospect of testing it soon, then I become kind of impatient. Some part of me says, let’s do this! Let’s try to achieve forthwith what I’ve always regarded as the #1 application of quantum computers, more important than codebreaking or even quantum simulation: namely, disproving the people who said that scalable quantum computing was impossible.


Amazon.com: Its Role in the U.S. Industry

Introduction

Amazon.com, often simply referred to as Amazon, has transformed from an online bookstore into a colossal global conglomerate that has fundamentally altered the landscape of retail, technology, and numerous other industries. Founded in 1994 by Jeff Bezos, Amazon’s rapid growth and diversification have made it a central player in the U.S. economy. Its impact is felt across various sectors, including e-commerce, logistics, cloud computing, entertainment, and even artificial intelligence. This article explores Amazon’s role in the U.S. industry, examining its influence, achievements, and the challenges it faces.

The Rise of Amazon.com

Origins and Early Growth

Amazon began as an online bookstore in Bezos’s garage in Bellevue, Washington. The vision was simple but revolutionary: to create an online platform where people could buy books. Bezos recognized the potential of the internet to disrupt traditional retail and chose books as his starting point due to their wide appeal and ease of distribution. By offering a vast selection and competitive prices, Amazon quickly gained a foothold in the market.

The company’s initial public offering (IPO) in 1997 marked the beginning of its journey toward becoming a dominant player in the U.S. and global markets. Amazon’s early success was driven by its focus on customer satisfaction, an extensive inventory, and a commitment to fast and reliable delivery. This focus laid the foundation for its expansion into other product categories and services.

Diversification and Expansion

Amazon’s diversification strategy began with the introduction of new product categories, such as electronics, toys, and apparel. This move positioned Amazon as a one-stop shop for consumers, significantly expanding its customer base. The company also introduced its marketplace platform, allowing third-party sellers to offer their products alongside Amazon’s own inventory. This not only increased the variety of products available but also created a new revenue stream for the company through commissions on sales.

One of the most significant milestones in Amazon’s diversification was the launch of Amazon Web Services (AWS) in 2006. AWS provided cloud computing services to businesses, enabling them to rent computing power and storage rather than investing in expensive infrastructure. This service quickly became a cornerstone of Amazon’s business, contributing significantly to its profitability and establishing Amazon as a leader in the tech industry.

Amazon’s Role in E-Commerce

Transforming Retail

Amazon’s impact on the retail industry cannot be overstated. It has revolutionized the way consumers shop, shifting much of retail activity from brick-and-mortar stores to online platforms. The convenience of shopping from home, coupled with Amazon’s vast selection and competitive pricing, has led to a fundamental change in consumer behavior. This shift has forced traditional retailers to adapt, leading to the rise of omnichannel strategies that integrate online and offline sales.

The concept of “one-click shopping,” patented by Amazon in 1999, further streamlined the online shopping experience. This innovation reduced the friction in the purchasing process, contributing to higher conversion rates and reinforcing Amazon’s dominance in e-commerce. Additionally, Amazon Prime, launched in 2005, offered customers free two-day shipping and other benefits for an annual fee, further solidifying customer loyalty and increasing the frequency of purchases.

Impact on Small Businesses

While Amazon has provided opportunities for small businesses through its marketplace platform, it has also posed challenges. On the one hand, small businesses gain access to a vast customer base and the logistics infrastructure that Amazon offers. On the other hand, they face intense competition, not only from other third-party sellers but also from Amazon itself. The company’s ability to undercut prices and its control over the marketplace platform have led to concerns about fairness and market power.

Moreover, Amazon’s algorithms and data-driven approach to retail have raised questions about the transparency of how products are promoted and priced on the platform. Small businesses often struggle to achieve visibility without spending on Amazon’s advertising services, which can be costly. Despite these challenges, many small businesses continue to rely on Amazon as a vital sales channel, underscoring its central role in the U.S. retail industry.

Amazon in the Logistics and Supply Chain Industry

Revolutionizing Logistics

Amazon’s impact extends beyond retail into logistics and supply chain management. To fulfill its promise of fast and reliable delivery, Amazon has invested heavily in building a sophisticated logistics network, which includes an extensive system of fulfillment centers, advanced robotics, and a growing fleet of delivery vehicles, including drones.

Amazon’s logistics capabilities have set new standards for the industry. The company has pushed the boundaries of what is possible in terms of speed and efficiency, challenging traditional logistics providers like FedEx and UPS. Amazon’s commitment to customer satisfaction has driven innovations such as same-day and even one-hour delivery in select areas, further raising consumer expectations.

In-House Logistics Services

In recent years, Amazon has taken steps to reduce its reliance on third-party logistics providers by expanding its in-house delivery capabilities. The launch of Amazon Logistics, a service that uses independent contractors to deliver packages, is a testament to this strategy. This move has enabled Amazon to exert greater control over the delivery process and reduce costs.

However, this expansion has not been without controversy. Amazon’s use of independent contractors has sparked debates about labor practices and the gig economy. Critics argue that Amazon’s business model places financial and physical burdens on its delivery drivers, who are often classified as independent contractors rather than employees. This classification exempts Amazon from providing benefits and protections typically afforded to employees, such as health insurance and minimum wage guarantees.

Amazon Web Services: The Backbone of the Internet

Dominating Cloud Computing

Amazon Web Services (AWS) has emerged as one of the most significant contributors to Amazon’s success. As the leading provider of cloud computing services, AWS powers a vast portion of the internet, supporting everything from startups to large enterprises. Its services include computing power, storage, databases, machine learning, and more.

AWS’s dominance in cloud computing has had a profound impact on the tech industry. By providing scalable and cost-effective solutions, AWS has lowered the barriers to entry for new businesses, fostering innovation and entrepreneurship. Companies no longer need to invest heavily in physical infrastructure; instead, they can rent the necessary resources on demand from AWS.
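
As a concrete, hedged illustration of “renting resources on demand,” here is a short boto3 sketch that launches a small EC2 instance and writes an object to S3. The AMI ID, bucket name, and region are placeholders, and the calls assume AWS credentials are already configured.

import boto3

# Placeholder region, AMI, and bucket; assumes AWS credentials are configured.
ec2 = boto3.client("ec2", region_name="us-east-1")
launch = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("launched instance:", launch["Instances"][0]["InstanceId"])

s3 = boto3.client("s3", region_name="us-east-1")
s3.put_object(
    Bucket="example-analytics-bucket",   # placeholder bucket name
    Key="reports/hello.txt",
    Body=b"compute and storage rented on demand",
)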

Economic Impact and Innovation

The success of AWS has not only boosted Amazon’s financial performance but also contributed to the broader U.S. economy. AWS has created jobs, driven innovation, and supported the growth of numerous tech companies. Its services have become integral to the operations of many businesses, from streaming services like Netflix to financial institutions and government agencies.

AWS’s role in advancing technologies such as artificial intelligence and machine learning has also been significant. By making these technologies accessible through cloud services, AWS has enabled companies to develop new applications and services that were previously out of reach. This has spurred growth in sectors such as healthcare, finance, and entertainment.

Amazon’s Influence on Entertainment and Media

Amazon Studios and Prime Video

Amazon’s foray into the entertainment industry began with the launch of Amazon Studios and Prime Video. These platforms have become key players in the streaming wars, competing with giants like Netflix, Disney+, and HBO Max. Amazon Studios produces original content, including critically acclaimed series like The Marvelous Mrs. Maisel and The Boys, as well as feature films.

Prime Video, available as part of the Amazon Prime membership, has become a major driver of subscriber growth. By offering a mix of original content and licensed programming, Amazon has been able to attract a diverse audience. The company’s investment in high-quality content has not only boosted its streaming service but also positioned it as a significant player in Hollywood.

Impact on the Publishing Industry

Amazon’s origins as an online bookstore continue to influence the publishing industry. The company has become the largest bookseller in the world, both in physical books and e-books. The Kindle, Amazon’s e-reader, revolutionized the way people consume books, making digital reading mainstream.

However, Amazon’s dominance in the book market has raised concerns among publishers and authors. The company’s pricing strategies and negotiation tactics have led to disputes over revenue sharing and control. Amazon’s influence over the publishing industry extends to self-publishing, where its Kindle Direct Publishing platform allows authors to bypass traditional publishers and reach readers directly. While this has democratized publishing, it has also led to an oversaturation of the market and challenges in quality control.

Challenges and Criticisms

Regulatory Scrutiny

Amazon’s immense size and influence have made it a target for regulatory scrutiny. In the U.S. and abroad, lawmakers and regulators have raised concerns about the company’s market power, labor practices, and treatment of third-party sellers. Antitrust investigations have been launched to determine whether Amazon engages in anti-competitive behavior, such as favoring its own products over those of third-party sellers on its platform.

The company’s expansion into various industries has also led to concerns about its dominance and potential to stifle competition. Critics argue that Amazon’s control over data, logistics, and retail gives it an unfair advantage, making it difficult for smaller companies to compete. In response, there have been calls for greater regulation and even the potential breakup of Amazon into smaller entities.

Labor Practices and Workers’ Rights

Amazon’s labor practices have come under intense scrutiny, particularly in its fulfillment centers and delivery network. Reports of grueling working conditions, high injury rates, and inadequate breaks have sparked widespread criticism. Workers have organized protests and strikes, demanding better pay, safer working conditions, and the right to unionize.

The company’s use of technology to monitor and manage workers has also raised ethical concerns. Amazon’s reliance on algorithms to track productivity and enforce performance targets has been criticized for creating a dehumanizing work environment. The company’s resistance to unionization efforts has further fueled debates about workers’ rights and corporate responsibility.

Environmental Impact

As one of the largest companies in the world, Amazon’s environmental impact is significant. The company’s vast logistics network and rapid delivery services contribute to carbon emissions and packaging waste. Amazon has faced criticism for its role in driving consumerism and its contribution to environmental degradation.

In response, Amazon has pledged to become more sustainable. The company launched the Climate Pledge in 2019, committing to reach net-zero carbon emissions by 2040. Amazon has also invested in renewable energy, electric delivery vehicles, and sustainable packaging. While these efforts are a step in the right direction, critics argue that more needs to be done to address the environmental impact of the company’s operations.