Physics World

    Motion through quantum space–time is traced by ‘q-desics’

    Subtle quantum effects could be observed in how particles traverse cosmological distances

    Physicists searching for signs of quantum gravity have long faced a frustrating problem. Even if gravity does have a quantum nature, its effects are expected to show up only at extremely small distances, far beyond the reach of experiments. A new theoretical study by Benjamin Koch and colleagues at the Technical University of Vienna in Austria suggests a different strategy. Instead of looking for quantum gravity where space–time is tiny, the researchers argue that subtle quantum effects could influence how particles and light move across huge cosmological distances.

    Their work introduces a new concept called q-desics, short for quantum-corrected paths through space–time. These paths generalize the familiar trajectories predicted by Einstein’s general theory of relativity and could, in principle, leave observable fingerprints in cosmology and astrophysics.

    General relativity and quantum mechanics are two of the most successful theories in physics, yet they describe nature in radically different ways. General relativity treats gravity as the smooth curvature of space–time, while quantum mechanics governs the probabilistic behavior of particles and fields. Reconciling the two has been one of the central challenges of theoretical physics for decades.

    “One side of the problem is that one has to come up with a mathematical framework that unifies quantum mechanics and general relativity in a single consistent theory,” Koch explains. “Over many decades, numerous attempts have been made by some of the most brilliant minds humanity has to offer.” Despite this effort, no approach has yet gained universal acceptance.

    Deeper difficulty

    There is another, perhaps deeper difficulty. “We have little to no guidance, neither from experiments nor from observations that could tell us whether we actually are heading in the right direction or not,” Koch says. Without experimental clues, many ideas about quantum gravity remain largely speculative.

    That does not mean the quest lacks value. Fundamental research often pays off in unexpected ways. “We rarely know what to expect behind the next tree in the jungle of knowledge,” Koch says. “We only can look back and realize that some of the previously explored trees provided treasures of great use and others just helped us to understand things a little better.”

    Almost every test of general relativity relies on a simple assumption. Light rays and freely falling particles follow specific paths, known as geodesics, determined entirely by the geometry of space–time. From gravitational lensing to planetary motion, this idea underpins how physicists interpret astronomical data.

    Koch and his collaborators asked what happens to this assumption when space–time itself is treated as a quantum object. “Almost all interpretations of observational astrophysical and astronomical data rest on the assumption that in empty space light and particles travel on a path which is described by the geodesic equation,” Koch says. “We have shown that in the context of quantum gravity this equation has to be generalized.”

    Generalized q-desic

    The result is the q-desic equation. Instead of relying only on an averaged, classical picture of space–time, q-desics account for the underlying quantum structure more directly. In practical terms, this means that particles may follow paths that deviate slightly from those predicted by classical general relativity, even when space–time looks smooth on average.
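
    For reference, the classical trajectories that the q-desics generalize are solutions of the geodesic equation of general relativity (the explicit form of the quantum corrections is given in the team’s paper and is not reproduced here):

        \frac{\mathrm{d}^{2}x^{\mu}}{\mathrm{d}\tau^{2}} + \Gamma^{\mu}_{\ \nu\rho}\,\frac{\mathrm{d}x^{\nu}}{\mathrm{d}\tau}\,\frac{\mathrm{d}x^{\rho}}{\mathrm{d}\tau} = 0

    where x^µ(τ) is the particle’s worldline parametrized by proper time τ and the Christoffel symbols Γ encode the curvature of space–time. Roughly speaking, the q-desic equation supplements this with quantum-gravity correction terms that vanish when space–time is treated purely classically.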

    Crucially, the team found that these deviations are not confined to tiny distances. “What makes our first results on the q-desics so interesting is that apart from these short distance effects, there are also long range effects possible, if one takes into account the existence of the cosmological constant,” Koch says.

    This opens the door to possible tests using existing astronomical data. According to the study, q-desics could differ from ordinary geodesics over cosmological distances, affecting how matter and light propagate across the universe.

    “The q-desics might be distinguished from geodesics at cosmological large distances,” Koch says, “which would be an observable manifestation of quantum gravity effects.”

    Cosmological tensions

    The researchers propose revisiting cosmological observations. “Currently, there are many tensions popping up between the Standard Model of cosmology and observed data,” Koch notes. “All these tensions are linked, one way or another, to the use of geodesics at vastly different distance scales.” The q-desic framework offers a new lens through which to examine such discrepancies.

    So far, the team has explored simplified scenarios and idealized models of quantum space–time. Extending the framework to more realistic situations will require substantial effort.

    “The initial work was done with one PhD student (Ali Riahina) and one colleague (Ángel Rincón),” Koch says. “There are so many things to be revisited and explored that our to-do list is growing far too long for just a few people.” One immediate goal is to encourage other researchers to engage with the idea and test it in different theoretical settings.

    Whether q-desics will provide an observational window into quantum gravity remains to be seen. But by shifting attention from the smallest scales to the largest structures in the cosmos, the work offers a fresh perspective on an enduring problem.

    The research is described in Physical Review D.

    The post Motion through quantum space–time is traced by ‘q-desics’ appeared first on Physics World.

    https://physicsworld.com/a/motion-through-quantum-space-time-is-traced-by-q-desics/
    No Author

    From building a workforce to boosting research and education – future quantum leaders have their say

    Matin Durrani talks to four leaders from quantum science and technology about where the field is going next

    The International Year of Quantum Science and Technology has celebrated all the great developments in the sector – but what challenges and opportunities lie in store? That was the question deliberated by four future leaders in the field at the Royal Institution in central London in November. The discussion took place during the two-day conference “Quantum science and technology: the first 100 years; our quantum future”, which was part of a week-long series of quantum-related events in the UK organized by the Institute of Physics.

    As well as outlining the technical challenges in their fields, the speakers all stressed the importance of developing a “skills pipeline” so that the quantum sector has enough talented people to meet its needs. Also vital will be the need to communicate the mysteries and potential of quantum technology – not just to the public but to industrialists, government officials and venture capitalists.

    Two of the speakers – Nicole Gillett (Riverlane) and Muhammad Hamza Waseem (Quantinuum) – are from the quantum tech industry, with Mehul Malik (Heriot-Watt University) and Sarah Alam Malik (University College London) based in academia. The following is an edited version of the discussion.

    Quantum’s future leaders

    Deep thinkers The challenges and opportunities for quantum science and technology were discussed during a conference organized by the Institute of Physics at the Royal Institution on 5 November 2025 by (left to right, seated) Muhammad Hamza Waseem; Sarah Alam Malik; Mehul Malik; and Nicole Gillett. The discussion was chaired by Physics World editor-in-chief Matin Durrani (standing, far right). (Courtesy: Tushna Commissariat)

    Nicole Gillett is a senior software engineer at Riverlane, in Cambridge, UK. The company is a leader in quantum error correction, which is a critical part of a fully functioning, fault-tolerant quantum computer. Errors arise because quantum bits, or qubits, are so fragile and correcting them is far trickier than with classical devices. Riverlane is therefore trying to find ways to correct for errors without disturbing a device’s quantum states. Gillett is part of a team trying to understand how best to implement error-correcting algorithms on real quantum-computing chips.

    Mehul Malik, who studied physics at a liberal arts college in New York, was attracted to quantum physics because of what he calls a “weird middle ground between artistic creative thought and the rigour of physics”. After doing a PhD at the University of Rochester, he spent five years as a postdoc with Anton Zeilinger at the University of Vienna in Austria before moving to Heriot-Watt University in the UK. As head of its Beyond Binary Quantum Information research group, Malik works on quantum information processing and communication and fundamental studies of entanglement.

    Sarah Alam Malik is a particle physicist at University College London, using particle colliders to detect and study potential candidates for dark matter. She is also trying to use quantum computers to speed up the discovery of new physics given that what she calls “our most cherished and compelling theories” for physics beyond the Standard Model, such as supersymmetry, have not yet been seen. In particular, Malik is trying to find new physics in a way that’s “model agnostic” – in other words, using quantum computers to search particle-collision data for anomalous events that have not been seen before.

    Muhammad Hamza Waseem studied electrical engineering in Pakistan, but got hooked on quantum physics after getting involved in recreating experiments to test Bell’s inequalities in what he claims was the first quantum optics lab in the country. Waseem then moved to the University of Oxford in the UK to do a PhD studying spin waves to make classical and quantum logic circuits. Unable to work when his lab shut during the COVID-19 pandemic, Waseem approached Quantinuum to see if he could help them in their quest to build quantum computers using ion traps. Now based at the company, he studies how quantum computers can do natural-language processing. “Think ChatGPT, but powered with quantum computers,” he says.

    What will be the biggest or most important application of quantum technology in your field over the next 10 years?

    Nicole Gillett: If you look at roadmaps of quantum-computing companies, you’ll find that IBM, for example, intends to build the world’s first utility scale and fault-tolerant quantum computer by the end of the decade. Beyond 2033, they’re committing to have a system that could support 2000 “logical qubits”, which are essentially error-corrected qubits, in which the data of one qubit has been encoded into many qubits.

    What can be achieved with that number of qubits is a difficult question to answer but some theorists, such as Juan Maldacena, have proposed some very exotic ideas, such as using a system of 7000 qubits to simulate black-hole dynamics. Now that might not be a particularly useful industry application, but it tells you about the potential power of a machine like this.

    Mehul Malik: In my field, quantum networks that can distribute individual quantum particles or entangled states over large and short distances will have a significant impact within the next 10 years. Quantum networks will connect smaller, powerful quantum processors to make a larger quantum device, whether for computing or communication. The technology is quite mature – in fact, we’ve already got a quantum network connecting banks in London.

    I will also add something slightly controversial. We often try to distinguish between quantum and non-quantum technologies, but what we’re heading towards is combining classical state-of-the-art devices with technology based on inherently quantum effects – what you might call “quantum adjacent technology”. Single-photon detectors, for example, are going to revolutionize healthcare, medical imaging and even long-distance communication.

    Sarah Alam Malik: For me, the biggest impact of quantum technology will be applying quantum computing algorithms in physics. Can we quantum simulate the dynamics of, say, proton–proton collisions in a more efficient and accurate manner? Can we combine quantum computing with machine learning to sift through data and identify anomalous collisions that are beyond those expected from the Standard Model?

    Quantum technology is letting us ask very fundamental questions about nature.

    Sarah Alam Malik, University College London

    Quantum technology, in other words, is letting us ask very fundamental questions about nature. Emerging in theoretical physics, for example, is the idea that the fundamental layer of reality may not be particles and fields, but units of quantum information. We’re looking at the world through this new quantum-theoretic lens and asking questions such as whether it’s possible to measure entanglement in top quarks and even explore Bell-type inequalities at particle colliders.

    One interesting quantity is “magic”, which is a measure of how far you are from having something that can be simulated classically (Phys. Rev. D 110 116016). The more magic there is in a system the less easy it is to simulate classically – and therefore the greater the computational resource it possesses for quantum computing. We’re asking how much “magic” there is in, for instance, top quarks produced at the Large Hadron Collider. So one of the most important developments for me may well be asking questions in a very different way to before.
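
    As a rough illustration of what such a measure looks like in practice, here is a minimal single-qubit sketch of one common quantifier of magic, the second stabilizer Rényi entropy. It is an assumption, not a statement about the cited paper, that this is the measure used there, and the example states below are arbitrary.

        import numpy as np

        # Single-qubit Pauli matrices
        I = np.eye(2, dtype=complex)
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Y = np.array([[0, -1j], [1j, 0]])
        Z = np.array([[1, 0], [0, -1]], dtype=complex)

        def stabilizer_renyi_entropy(psi):
            """Second stabilizer Renyi entropy M2 of a pure single-qubit state.

            M2 = -log2( (1/d) * sum_P <psi|P|psi>^4 ), with d = 2 for one qubit.
            M2 = 0 for stabilizer states; M2 > 0 signals 'magic'.
            """
            d = 2
            expvals = [np.real(np.vdot(psi, P @ psi)) for P in (I, X, Y, Z)]
            return -np.log2(sum(e**4 for e in expvals) / d)

        zero = np.array([1, 0], dtype=complex)                     # stabilizer state |0>
        t_state = np.array([1, np.exp(1j*np.pi/4)]) / np.sqrt(2)   # non-stabilizer "T state"

        print(stabilizer_renyi_entropy(zero))     # ~0.0   (classically simulable)
        print(stabilizer_renyi_entropy(t_state))  # ~0.415 (= log2(4/3), a magic state)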

    Muhammad Hamza Waseem: Technologically speaking, the biggest impact will be simulating quantum systems using a quantum computer. In fact, researchers from Google already claim to have simulated a wormhole in a quantum computer, albeit a very simple version that could have been tackled with a classical device (Nature 612 55).

    But the most significant impact has to do with education. I believe quantum theory teaches us that reality is not about particles and individuals – but relations. I’m not saying that particles don’t exist but they emerge from the relations. In fact, with colleagues at the University of Oxford, we’ve used this idea to develop a new way of teaching quantum theory, called Quantum in Pictures.

    We’ve already tried our diagrammatic approach with a group of 16–18-year-olds, teaching them the entire quantum-information course that’s normally given to postgraduates at Oxford. At the end of our two-month course, which had one lecture and tutorial per week, students took an exam with questions from past Oxford papers. An amazing 80% of students passed and half got distinctions.

    For quantum theory to have a big impact, we have to make quantum physics more accessible to everyone.

    Muhammad Hamza Waseem, Quantinuum

    I’ve also tried the same approach on pupils in Pakistan: the youngest, who was just 13, can now explain quantum teleportation and quantum entanglement. My point is that for quantum theory to have a big impact, we have to make quantum physics more accessible to everyone.

    What will be the biggest challenges and difficulties over the next 10 years for people in quantum science and technology?

    Nicole Gillett: The challenge will be building up a big enough quantum workforce. Sometimes people hear the words “quantum computer” and get scared, worrying they’re going to have to solve Hamiltonians all the time. But is it possible to teach students at high-school level about these concepts? Can we get the ideas across in a way that is easy to understand so people are interested and excited about quantum computing?

    At Riverlane, we’ve run week-long summer workshops for the last two years, where we try to teach undergraduate students enough about quantum error correction so they can do “decoding”. That’s when you take the results of error correction and try to figure out what errors occurred on your qubits. By combining lectures and hands-on tutorials we found we could teach students about error correction – and get them really excited too.
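
    As a toy picture of what “decoding” involves (a pedagogical sketch, not Riverlane’s actual decoder), the snippet below implements a lookup-table decoder for the three-qubit bit-flip repetition code: the two parity-check outcomes – the syndrome – are mapped to the single bit flip most likely to have caused them.

        # Toy decoder for the three-qubit bit-flip repetition code (illustrative only).
        # The logical bit b is encoded as (b, b, b). Two parity checks compare
        # neighbouring bits; their outcomes form the "syndrome".

        def syndrome(bits):
            """Parity checks: bit0 vs bit1, and bit1 vs bit2."""
            return (bits[0] ^ bits[1], bits[1] ^ bits[2])

        # Lookup table: syndrome -> index of the bit assumed to have flipped
        DECODER = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

        def decode(bits):
            """Return the corrected codeword, assuming at most one bit flip."""
            flip = DECODER[syndrome(bits)]
            corrected = list(bits)
            if flip is not None:
                corrected[flip] ^= 1
            return tuple(corrected)

        print(decode((0, 1, 0)))  # middle bit flipped -> corrected to (0, 0, 0)
        print(decode((1, 1, 1)))  # clean logical 1    -> unchanged

    Real decoders face the same task at vastly larger scale and under tight time budgets, which is where the implementation challenges described above come from.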

    Our biggest challenge will be not having a workforce ready for quantum computing.

    Nicole Gillett, Riverlane

    We had students from physics, philosophy, maths and computer science take the course – the only pre-requisite, apart from being curious about quantum computers, is some kind of coding ability. My point is that these kinds of boot camps are going to be so important to inspire future generations. We need to make the information accessible to people because otherwise our biggest challenge will be not having a workforce ready for quantum computing.

    Mehul Malik: One of the big challenges is international cooperation and collaboration. Imagine if, in the early days of the Internet, the US military had decided they’d keep it to themselves for national-security reasons or if CERN hadn’t made the World Wide Web open source. We face the same challenge today because we live in a world that’s becoming polarized and protectionist – and we don’t want that to hamper international collaboration.

    Over the last few decades, quantum science has developed in a very international way and we have come so far because of that. I have lived in four different continents, but when I try to recruit internationally, I face significant hurdles from the UK government, from visa fees and so on. To really progress in quantum tech, we need to collaborate and develop science in a way that’s best for humanity not just for each nation.

    Sarah Alam Malik: One of the most important challenges will be managing the hype that inevitably surrounds the field right now. We’ve already seen this with artificial intelligence (AI), which has gone through the whole hype cycle. Lots of people were initially interested, then the funding dried up when reality didn’t match expectations. But now AI has come back with such resounding force that we’re almost unprepared for all the implications of it.

    Quantum can learn from the AI hype cycle, finding ways to manage expectations of what could be a very transformative technology. In the near and mid-term, we need to not overplay things and to be cautious – yet be braced for the impact the technology could have. It’s a case of balancing hype with reality.

    Muhammad Hamza Waseem: Another important challenge is how to distribute funding between research on applications and research on foundations. A lot of the good technology we use today emerged from foundational ideas in ways that were not foreseen by the people originally working on them. So we must ensure that foundational research gets the funding it deserves or we’ll hit a dead end at some point.

    Will quantum tech alter how we do research, just as AI could do?

    Mehul Malik: AI is already changing how I do research, speeding up the way I discover knowledge. Using Google Gemini, for example, I now ask my browser questions instead of searching for specific things. But you still have to verify all the information you gather, for example, by checking the links it cites. I recently asked AI a complex physics question to which I knew the answer and the solution it gave was terrible. As for how quantum is changing research, I’m less sure, but better detectors through quantum-enabled research will certainly be good.

    Muhammad Hamza Waseem: AI is already being deployed in foundational research, for example, to discover materials for more efficient batteries. A lot of these applications could be integrated with quantum computing in some way to speed work up. In other words, a better understanding of quantum tech will let us develop AI that is safer, more reliable, more interpretable – and if something goes wrong, you know how to fix it. It’s an exciting time to be a researcher, especially in physics.

    Sarah Alam Malik: I’ve often wondered if AI, with the breadth of knowledge that it has across all different fields, already has answers to questions that we couldn’t answer – or haven’t been able to answer – just because of the boundaries between disciplines. I’m a physicist and so can’t easily solve problems in biology. But could AI help us to do breakthrough research at the interface between disciplines?

    What lessons can we learn from the boom in AI when it comes to the long-term future of quantum tech?

    Nicole Gillett: As a software engineer, I once worked at an Internet security company called Cloudflare, which taught me that it’s never too early to be thinking about how any new technology – both AI and quantum – might be abused. What’s also really interesting is whether AI and machine learning can be used to build quantum computers by developing the coding algorithms they need. Companies like Google are active in this area, and so is Riverlane.

    Mehul Malik: I recently discussed this question with a friend who works in AI, who said that the huge AI boom in industry, with all the money flowing into it, has effectively killed academic research in the field. A lot of AI research is now industry-led and goal-orientated – and there’s a risk that the economic advantages of AI will kill curiosity-driven research. The remedy, according to my friend, is to pay academics in AI more as they are currently being offered much larger salaries to work in the private sector.

    We need to diversify so that the power to control or chart the course of quantum technologies is not in the hands of a few privileged monopolies.

    Mehul Malik, Heriot-Watt University

    Another issue is that a lot of power is in the hands of just a few companies, such as Nvidia and ASML. The lesson for the quantum sector is that we need to diversify early on so that the power to control or chart the course of quantum technologies is not in the hands of a few privileged monopolies.

    Sarah Alam Malik: Quantum technology has a lot to learn from AI, which has shown that we need to break down the barriers between disciplines. After all, some of the most interesting and impactful research in AI has happened because companies can hire whoever they need to work on a particular problem, whether it’s a computer scientist, a biologist, a chemist, a physicist or a mathematician.

    Nature doesn’t differentiate between biology and physics. In academia we need not only people who are hyper-specialized but also a crop of generalists who are knee-deep in one field but have experience in other areas too.

    The lesson from the AI boom is to blur the artificial boundaries between disciplines and make them more porous. In fact, quantum is a fantastic playground for that because it is inherently interdisciplinary. You have to bring together people from different disciplines to deliver this kind of technology.

    Muhammad Hamza Waseem: AI research is in a weird situation where there are lots of excellent applications but so little is understood about how AI machines work. We have no good scientific theory of intelligence or of consciousness. We need to make sure that quantum computing research does not become like that and that academic research scientists are well-funded and not distracted by all the hype that industry always creates.

    At the start of the previous century, the mathematician David Hilbert said something like “physics is becoming too difficult for the physicists”. I think quantum computing is also somewhat becoming too challenging for the quantum physicists. We need everyone to get involved for the field to reach its true potential.

    Towards “green” quantum technology

    Today’s AI systems use vast amounts of energy, but should we also be concerned about the environmental impact of quantum computers? Google, for example, has already carried out quantum error-correction experiments in which data from the company’s quantum computers had to be processed once every microsecond per round of error correction (Nature 638 920). “Finding ways to process it to keep up with the rate at which it’s being generated is a very interesting area of research,” says Nicole Gillett.
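
    A rough back-of-the-envelope estimate shows why this is demanding (the code distance below is an assumed, illustrative value, not a figure from the Nature paper): a surface-code patch of distance d produces about d² − 1 syndrome bits per round, and at one round per microsecond that already amounts to a data stream of order 100 Mbit/s per logical qubit.

        # Illustrative estimate of the classical data rate generated by surface-code
        # error correction (assumed parameters, not figures from the cited experiment).

        d = 11                               # assumed code distance
        round_time_s = 1e-6                  # one error-correction round per microsecond
        syndrome_bits_per_round = d**2 - 1   # stabilizer measurements per round

        bits_per_second = syndrome_bits_per_round / round_time_s
        print(f"{bits_per_second/1e6:.0f} Mbit/s per logical qubit, before any compression")
        # ~120 Mbit/s for these assumptions; many logical qubits multiply this further.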

    However, quantum computers could cut our energy consumption by allowing calculations to be performed far more quickly and efficiently than is possible with classical machines. For Mehul Malik, another important step towards “green” quantum technology will be to lower the energy that quantum devices require and to build detectors that work at room temperature and are robust against noise. Quantum computers themselves can also help, he thinks, by discovering energy-efficient technologies, materials and batteries.

    A quantum laptop?

    Will we ever see portable quantum computers or will they always be like today’s cloud-computing devices in distant data centres? Muhammad Hamza Waseem certainly does not envisage a word processor that uses a quantum computer. But he points to companies like SPINQ, which has built a two-qubit computer for educational purposes. “In a sense, we already have a portable quantum computer,” he says. For Mehul Malik, though, it’s all about the market. “If there’s a need for it,” he joked, “then somebody will make it.”

    If I were science minister…

    When asked by Peter Knight – one of the driving forces behind the UK’s quantum-technology programme – what the panel would do if they were science minister, Nicole Gillett said she would seek to make the UK the leader in quantum computing by investing heavily in education. Mehul Malik would cut the costs of scientists moving across borders, pointing out that many big firms have been founded by immigrants. Sarah Alam Malik called for long-term funding – and not to give up if short-term gains don’t transpire. Muhammad Hamza Waseem, meanwhile, said we should invest more in education, research and the international mobility of scientists.

    This article forms part of Physics World’s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

    Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

    Find out more on our quantum channel.

    The post From building a workforce to boosting research and education – future quantum leaders have their say appeared first on Physics World.

    https://physicsworld.com/a/from-building-a-workforce-to-boosting-research-and-education-future-quantum-leaders-have-their-say/
    Matin Durrani

    Will this volcano explode, or just ooze? A new mechanism could hold some answers

    Discovery of shear-induced bubble formation sheds more light on the divide between eruption and effusion

    Bubbling up: A schematic representation of a volcanic system and a snapshot of one of the team’s experiments. The shear-induced bubbles are marked with red ellipses. (Courtesy: O Roche)

    An international team of researchers has discovered a new mechanism that can trigger the formation of bubbles in magma – a major driver of volcanic eruptions. The finding could improve our understanding of volcanic hazards by improving models of magma flow through conduits beneath Earth’s surface.

    Volcanic eruptions are thought to occur when magma deep within the Earth’s crust decompresses. This decompression allows volatile chemicals dissolved in the magma to escape in gaseous form, producing bubbles. The more bubbles there are in the viscous magma, the faster it will rise, until eventually it tears itself apart.

    “This process can be likened to a bottle of sparkling water containing dissolved volatiles that exsolve when the bottle is opened and the pressure is released,” explains Olivier Roche, a member of the volcanology team at the Magmas and Volcanoes Laboratory (LMV) at the Université Clermont Auvergne (UCA) in France and lead author of the study.

    Magma shearing forces could induce bubble nucleation

    The new work, however, suggests that this explanation is incomplete. In their study, Roche and colleagues at UCA, the French National Research Institute for Sustainable Development (IRD), Brown University in the US and ETH Zurich in Switzerland began with the assumption that the mechanical energy in magma comes from the pressure gradient between the nucleus of a gas bubble and the ambient liquid. “However, mechanical energy may also be provided by shear stress in the magma when it is in motion,” Roche notes. “We therefore hypothesized that magma shearing forces could induce bubble nucleation too.”
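
    In heavily simplified, textbook terms (these are standard expressions, not the authors’ model), the two energy sources can be written as

        \Delta G^{*} = \frac{16\pi\,\sigma^{3}}{3\,\Delta P^{2}},
        \qquad
        \dot{e}_{\mathrm{shear}} = \eta\,\dot{\gamma}^{2},

    where the first expression is the classical nucleation barrier for forming a bubble against the melt–gas surface tension σ at gas overpressure ΔP, and the second is the rate at which a melt of viscosity η sheared at rate γ̇ dissipates mechanical energy per unit volume. The hypothesis, in essence, is that in a sufficiently viscous, rapidly sheared magma the energy supplied by the second mechanism can rival the barrier set by the first, even without decompression.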

    To test their theory, the researchers reproduced the internal movements of magma in liquid polyethylene oxide saturated with carbon dioxide at 80°C. They then set up a device to observe bubble nucleation in situ while the material was experiencing shear stress. They found that the energy provided by viscous shear is large enough to trigger bubble formation – even if decompression isn’t present.

    The effect, which the team calls shear-induced bubble nucleation, depends on the magma’s viscosity and on the amount of gas it contains. According to Roche, the presence of this effect could help researchers determine whether an eruption is likely to be explosive or effusive. “Understanding which mechanism is at play is fundamental for hazard assessment,” he says. “If many gas bubbles grow deep in the volcano conduit in a volatile-rich magma, for example, they can combine with each other and form larger bubbles that then open up degassing conduits connected to the surface.

    “This process will lead to effusive eruptions, which is counterintuitive (but supported by some earlier observations),” he tells Physics World. “It calls for the development of new conduit flow models to predict eruptive style for given initial conditions (essentially volatile content) in the magma chamber.”

    Enhanced predictive power

    By integrating this mechanism into future predictive models, the researchers aim to develop tools that anticipate the intensity of eruptions better, allowing scientists and local authorities to improve the way they manage volcanic hazards.

    Looking ahead, they are planning new shear experiments on liquids that contain solid particles, mimicking crystals that form in magma and are believed to facilitate bubble nucleation. In the longer term, they plan to study combinations of shear and compression, though Roche acknowledges that this “will be challenging technically”.

    They report their present work in Science.

    The post Will this volcano explode, or just ooze? A new mechanism could hold some answers appeared first on Physics World.

    https://physicsworld.com/a/will-this-volcano-explode-or-just-ooze-a-new-mechanism-could-hold-some-answers/
    Isabelle Dumé

    Remote work expands collaboration networks but reduces research impact, study suggests

    Despite a 'concerning decline' in citation impact, there were, however, benefits to increasing remote interactions

    Academics who switch to hybrid working and remote collaboration do less impactful research. That’s according to an analysis of how scientists’ collaboration networks and academic outputs evolved before, during and after the COVID-19 pandemic (arXiv: 2511.18481). It involved studying author data from the arXiv preprint repository and the online bibliographic catalogue OpenAlex.

    To explore the geographic spread of collaboration networks, Sara Venturini from the Massachusetts Institute of Technology and colleagues looked at the average distance between the institutions of co-authors. They found that while the average distance between team members on publications increased from 2000 to 2021, there was a particularly sharp rise after 2022.
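
    A minimal sketch of the kind of metric involved (an illustrative reimplementation, not the authors’ code; the institutions and coordinates below are hypothetical examples): for each paper, take the great-circle distance between every pair of co-author institutions and average over the pairs.

        from itertools import combinations
        from math import asin, cos, radians, sin, sqrt

        def haversine_km(p, q):
            """Great-circle distance in km between two (lat, lon) points in degrees."""
            lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
            a = sin((lat2 - lat1) / 2)**2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2)**2
            return 2 * 6371 * asin(sqrt(a))

        def mean_team_distance(institution_coords):
            """Average pairwise distance between the co-authors' institutions."""
            pairs = list(combinations(institution_coords, 2))
            if not pairs:
                return 0.0
            return sum(haversine_km(p, q) for p, q in pairs) / len(pairs)

        # Hypothetical three-author paper: MIT, Oxford and ETH Zurich (approximate coordinates)
        coords = [(42.36, -71.09), (51.75, -1.25), (47.38, 8.55)]
        print(f"{mean_team_distance(coords):.0f} km")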

    This pattern, the researchers claim, suggests that the pandemic led to scientists collaborating more often with geographically distant colleagues. They found consistent patterns when they separated papers related to COVID-19 from those in unrelated areas, suggesting the trend was not solely driven by research on COVID-19.

    The researchers also examined how the number of citations a paper received within a year of publication changed with distance between the co-authors’ institutions. In general, as the average distance between collaborators increases, citations fall, the authors found.

    They suggest that remote and hybrid working hampers research quality by reducing spontaneous, serendipitous in-person interactions that can lead to deep discussions and idea exchange.

    Despite what the authors say is a “concerning decline” in citation impact, there are, however, benefits to increasing remote interactions. In particular, as the geographic spread of collaboration networks increases, so too do international partnerships and authorship diversity.

    Remote tools

    Lingfei Wu, a computational social scientist at the University of Pittsburgh, who was not involved in the study, told Physics World that he was surprised by the finding that remote teams produce less impactful work.

    “In our earlier research, we found that historically, remote collaborations tended to produce more impactful but less innovative work,” notes Wu. “For example, the Human Genome Project published in 2001 shows how large, geographically distributed teams can also deliver highly impactful science. One would expect the pandemic-era shift toward remote collaboration to increase impact rather than diminish it.”

    Wu says his work suggests that remote work is effective for implementing ideas but less effective for generating them, indicating that scientists need a balance between remote and in-person interactions. “Use remote tools for efficient execution, but reserve in-person time for discussion, brainstorming, and informal exchange,” he adds.

    The post Remote work expands collaboration networks but reduces research impact, study suggests appeared first on Physics World.

    https://physicsworld.com/a/remote-work-expands-collaboration-networks-but-reduces-research-impact-study-suggests/
    No Author

    How well do you know AI? Try our interactive quiz to find out

    Test your knowledge of the deep connections between physics, big data and AI

    There are 12 questions in total: blue is your current question and white means unanswered, with green and red being right and wrong. Check your scores at the end – and why not test your colleagues too?

    How did you do?

    10–12 Top shot – congratulations, you’re the next John Hopfield

    7–9 Strong skills – good, but not quite Nobel standard

    4–6 Weak performance – should have asked ChatGPT

    0–3 Worse than random – are you a bot?

    The post How well do you know AI? Try our interactive quiz to find out appeared first on Physics World.

    https://physicsworld.com/a/how-well-do-you-know-ai-try-our-interactive-quiz-to-find-out/
    No Author

    International Year of Quantum Science and Technology quiz

    What do you really know about quantum physics?

    This quiz was first published in February 2025. Now you can enjoy it in our new interactive quiz format and check your final score. There are 18 questions in total: blue is your current question and white means unanswered, with green and red being right and wrong.

     

    The post International Year of Quantum Science and Technology quiz appeared first on Physics World.

    https://physicsworld.com/a/international-year-of-quantum-science-and-technology-quiz/
    Matin Durrani

    Components of RNA among life’s building blocks found in NASA asteroid sample

    Samples from the near-Earth asteroid Bennu found to contain molecules and compounds vital to the origin of life

    More molecules and compounds vital to the origin of life have been detected in asteroid samples delivered to Earth by NASA’s OSIRIS-REx mission. The discovery strengthens the case that not only did life’s building blocks originate in space, but that the ingredients of RNA, and perhaps RNA itself, were brought to our planet by asteroids.

    Two new papers in Nature Geoscience and Nature Astronomy describe the discovery of the sugars ribose and glucose in the 120 g of samples returned from the near-Earth asteroid 101955 Bennu, as well as an unusual carbonaceous “gum” that holds important compounds for life. The findings complement the earlier discovery of amino acids and the nucleobases of RNA and DNA in the Bennu samples.

    A third new paper, in Nature Astronomy, addresses the abundance of pre-solar grains, which are dust grains that originated before the birth of our Solar System, such as dust from supernovae. Scientists led by Ann Nguyen of NASA’s Johnson Space Center found six times more dust coming directly from supernova explosions than is found, on average, in meteorites and other sampled asteroids. This could suggest differences in the concentration of different pre-solar dust grains in the disc of gas and dust that formed the Solar System.

    Space gum

    It’s the discovery of organic materials useful for life that steals the headlines, though. For example, the discovery of the space gum, which is essentially a hodgepodge chain of polymers, represents something never found in space before.

    Scott Sandford of NASA’s Ames Research Center, co-lead author of the Nature Astronomy paper describing the gum discovery, tells Physics World: “The material we see in our samples is a bit of a molecular jumble. It’s carbonaceous, but much richer in nitrogen and, to a lesser extent, oxygen, than most of the organic compounds found in extraterrestrial materials.”

    Sandford refers to the material as gum because of its pliability, bending and dimpling when pressure is applied, rather like chewing gum. And while much of its chemical functionality is replicated in similar materials on our planet, “I doubt it matches exactly with anything seen on Earth,” he says.

    Initially, Sandford found the gum using an infrared microscope, nicknaming the dust grains containing the gum “Lasagna” and “Neapolitan” because the grains are layered. To extract them from the rock in the sample, Sandford went to Zack Gainsforth of the University of California, Berkeley, who specializes in analysing and extracting materials from samples like this.

    Platinum scaffolding

    Having welded a tungsten needle to the Neapolitan sample in order to lift it, the pair quickly realised that the grain was very delicate.

    “When we tried to lift the sample it began to deform,” Gainsforth says. “Scott and I practically jumped out of our chairs and brainstormed what to do. After some discussion, we decided that we should add straps to give it enough mechanical rigidity to survive the lift.”

    Fragile sample A microscopic particle of asteroid Bennu is manipulated under a transmission electron microscope. To move the 30 µm fragment for further analysis, the researchers reinforced it with thin platinum strips (the L shape on the surface). (Courtesy: NASA/University of California, Berkeley)

    By straps, Gainsforth is referring to micro-scale platinum scaffolding applied to the grain to reinforce its structure while they cut it away with an ion beam. Platinum is often used as a radiation shield to protect samples from an ion beam, “but how we used it was anything but standard,” says Gainsforth. “Scott and I made an on-the-fly decision to reinforce the samples based on how they were reacting to our machinations.”

    With the sample extracted and reinforced, they used the ion beam cutter to shave it down until it was a thousand times thinner than a human hair, at which point it could be studied by electron microscopy and X-ray spectrometry. “It was a joy to watch Zack ‘micro-manipulate’ [the sample],” says Sandford.

    The nitrogen in the gum was found to be in nitrogen heterocycles, which are the building blocks of nucleobases in DNA and RNA. This brings us to the other new discovery, reported in Nature Geoscience, of the sugars ribose and glucose in the Bennu samples, by a team led by Yoshihiro Furukawa of Tohoku University in Japan.

    The ingredients of RNA

    Glucose is the primary source of energy for life, while ribose is a key component of the sugar-phosphate backbone that connects the information-carrying nucleobases in RNA molecules. Furthermore, the discovery of ribose now means that everything required to assemble RNA molecules is present in the Bennu sample.

    Notable by its absence, however, was deoxyribose, which is ribose minus one oxygen atom. Deoxyribose in DNA performs the same job as ribose in RNA, and Furukawa believes that its absence supports a popular hypothesis about the origin of life on Earth called RNA world. This describes how the first life could have used RNA instead of DNA to carry genetic information, catalyse biochemical reactions and self-replicate.

    Intriguingly, the presence of all RNA’s ingredients on Bennu raises the possibility that RNA could have formed in space before being brought to Earth.

    “Formation of RNA from its building blocks requires a dehydration reaction, which we can expect to have occurred both in ancient Bennu and on primordial Earth,” Furukawa tells Physics World.

    However, RNA itself would be very hard to identify because of its expected low abundance in the samples. So until there’s information to the contrary, “the present finding means that the ingredients of RNA were delivered from space to the Earth,” says Furukawa.

    Nevertheless, these discoveries are major milestones in the quest of astrobiologists and space chemists to understand the origin of life on Earth. Thanks to Bennu and the asteroid 162173 Ryugu, from which a sample was returned by the Japanese Aerospace Exploration Agency (JAXA) mission Hayabusa2, scientists are increasingly confident that the building blocks of life on Earth came from space.

    The post Components of RNA among life’s building blocks found in NASA asteroid sample appeared first on Physics World.

    https://physicsworld.com/a/components-of-rna-among-lifes-building-blocks-found-in-nasa-asteroid-sample/
    No Author

    Institute of Physics celebrates 2025 Business Award winners at parliamentary event

    Some 14 firms have won IOP business awards in 2025, bringing the total tally to 102

    A total of 14 physics-based firms in sectors from quantum and energy to healthcare and aerospace have won 2025 Business Awards from the Institute of Physics (IOP), which publishes Physics World. The awards were presented at a reception in the Palace of Westminster yesterday attended by senior parliamentarians and policymakers as well as investors, funders and industry leaders.

    The IOP Business Awards, which have been running since 2012, recognise the role that physics and physicists play in the economy, creating jobs and growth “by powering innovation to meet the challenges facing us today, ranging from climate change to better healthcare and food production”. More than 100 firms have now won Business Awards, with around 90% of those companies still commercially active.

    The parliamentary event honouring the 2025 winners was hosted by Dave Robertson, the Labour MP for Lichfield, who spent 10 years as a physics teacher in Birmingham before working for teaching unions. There was also a speech from Baron Sharma, who studied applied physics before moving into finance and later becoming a Conservative MP, Cabinet minister and president of the COP26 climate summit.

    Seven firms were awarded 2025 IOP Business Innovation Awards, which recognize companies that have “delivered significant economic and/or societal impact through the application of physics”. They include Oxford-based Tokamak Energy, which has developed “compact, powerful, robust, quench-resilient” high-temperature superconducting magnets for commercial fusion energy and for propulsion systems, accelerators and scientific instruments.

    Oxford Instruments was honoured for developing a novel analytical technique for scanning electron microscopes, enabling new capabilities and accelerating time to results by at least an order of magnitude. Ionoptika, meanwhile, was recognized for developing Q-One, which is a new generation of focused ion-beam instrumentation, providing single atom through to high-dose nanoscale advanced materials engineering for photonic and quantum technologies.

    The other four winners were: electronics firm FlexEnable for their organic transistor materials; Lynkeos Technology for the development of muonography in the nuclear industry; the renewable energy company Sunamp for their thermal storage system; and the defence and security giant Thales UK for the development of a solid-state laser for laser rangefinders.

    Business potential

    Six other companies have won an IOP Start-up Award, which celebrates young companies “with a great business idea founded on a physics invention, with the potential for business growth and significant societal impact”. They include Astron Systems for developing “long-lifetime turbomachinery to enable multi-reuse small rocket engines and bring about fully reusable small launch vehicles”, along with MirZyme Therapeutics for “pioneering diagnostics and therapeutics to eliminate preeclampsia and transform maternal health”.

    The other four winners were: Celtic Terahertz Technology for a metamaterial filter technology; Nellie Technologies for an algae-based carbon removal technology; Quantum Science for their development of short-wave infrared quantum dot technology; and Wayland Additive for the development and commercialisation of charge-neutralised electron beam metal additive manufacturing.

    James McKenzie, a former vice-president for business at the IOP, who was involved in judging the awards, says that all awardees are “worthy winners”. “It’s the passion, skill and enthusiasm that always impresses me,” McKenzie told Physics World.

    iFAST Diagnostics were also awarded the IOP Lee Lucas Award that recognises early-stage companies taking innovative products into the medical and healthcare sector. The firm, which was spun out of the University of Southampton, develops blood tests that can assess treatments for bacterial infections in a matter of hours rather than days. They are expecting to have approval for testing next year.

    “Especially inspiring was the team behind iFAST,” adds McKenzie, “who developed a method for very rapid tests, cutting the time from 48 hours to three hours, so patients can be given the right antibiotics.”

    “The award-winning businesses are all outstanding examples of what can be achieved when we build upon the strengths we have, and drive innovation off the back of our world-leading discovery science,” noted Tom Grinyer, IOP chief executive officer. “In the coming years, physics will continue to shape our lives, and we have some great strengths to build upon here in the UK, not only in specific sectors such as quantum, semiconductors and the green economy, but in our strong academic research and innovation base, our growing pipeline of spin-out and early-stage companies, our international collaborations and our growing venture capital community.”

    For the full list of winners, see here.

    The post Institute of Physics celebrates 2025 Business Award winners at parliamentary event appeared first on Physics World.

    https://physicsworld.com/a/institute-of-physics-celebrates-2025-business-award-winners-at-parliamentary-event/
    Michael Banks

    Leftover gamma rays produce medically important radioisotopes

    GeV-scale bremsstrahlung from an electron accelerator can be used to produce copper-64 and copper-67

    The “leftover” gamma radiation produced when the beam of an electron accelerator strikes its target is usually discarded. Now, however, physicists have found a new use for it: generating radioactive isotopes for diagnosing and treating cancer. The technique, which piggybacks on an already-running experiment, uses bremsstrahlung from an accelerator facility to trigger nuclear reactions in a layer of zinc foil. The products of these reactions include copper isotopes that are hard to make using conventional techniques, meaning that the technique could reduce their costs and expand access to treatments.

    Radioactive nuclides are commonly used to treat cancer, and so-called theranostic pairs are especially promising. These pairs occur when one isotope of an element provides diagnostic imaging while another delivers therapeutic radiation – a combination that enables precision tumour targeting to improve treatment outcomes.

    One such pair is 64Cu and 67Cu: the former emits positrons that can identify tumours in PET scans while the latter produces beta particles that can destroy cancerous cells. They also have a further clinical advantage in that copper binds to antibodies and other biomolecules, allowing the isotopes to be delivered directly into cells. Indeed, these isotopes have already been used to treat cancer in mice, and early clinical studies in humans are underway.

    “Wasted” photons might be harnessed

    Researchers led by Mamad Eslami of the University of York, UK, have now put forward a new way to make both isotopes. Their method exploits the fact that gamma rays generated by the intense electron beams in particle accelerator experiments interact only weakly with matter (relative to electrons or neutrons, at least). This means that many of them pass right through their primary target and into a beam dump. These “wasted” photons still carry enough energy to drive further nuclear reactions, though, and Eslami and colleagues realized that they could be harnessed to produce 64Cu and 67Cu.

    Eslami and colleagues tested their idea at the Mainz Microtron, an electron accelerator at Johannes Gutenberg University Mainz in Germany. “We wanted to see whether GeV-scale bremsstrahlung, already available at the electron accelerator, could be used in a truly parasitic configuration,” Eslami says. The real test, he adds, was whether they could produce 67Cu alongside the primary experiment, which was using the same electron beam and photon field to study hadron physics, without disturbing it or degrading the beam conditions.

    The answer turned out to be “yes”. What’s more, the researchers found that their approach could produce enough 67Cu for medical applications in about five days – roughly equal to the time required for a nuclear reactor to produce the equivalent amount of another important medical radionuclide, lutetium-177.
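
    A simple activation estimate shows why a few days is the natural timescale (the half-life is the tabulated value for 67Cu; the relation itself is the generic textbook one, not a calculation from the paper): under constant irradiation the activity approaches saturation as A(t) = A_sat(1 − e^(−λt)), so about five days – roughly two 67Cu half-lives – already yields around three quarters of the maximum achievable activity.

        import numpy as np

        # Illustrative activation estimate; only the half-life is a tabulated value.
        T_HALF_CU67_H = 61.8                 # half-life of Cu-67 in hours
        lam = np.log(2) / T_HALF_CU67_H      # decay constant, per hour

        def relative_activity(t_hours):
            """Fraction of the saturation activity reached after t hours of
            constant-rate irradiation: A(t)/A_sat = 1 - exp(-lambda * t)."""
            return 1 - np.exp(-lam * t_hours)

        for days in (1, 2.6, 5, 10):
            print(f"{days:>4} days -> {relative_activity(24 * days):.0%} of saturation")
        # Irradiating for ~5 days gives roughly three quarters of the maximum
        # Cu-67 activity; running much longer gains comparatively little.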

    Improving nuclear medicine treatments and reducing costs

    “Our results indicate that, under suitable conditions, high-energy electron and photon facilities that were originally built for nuclear or particle physics experiments could also be used to produce 67Cu and other useful radionuclides,” Eslami tells Physics World. In practice, however, Eslami adds that this will only be realistic at sites with strong, well-characterized bremsstrahlung fields. High-power multi-GeV electron facilities such as the planned Electron-Ion Collider at Brookhaven National Laboratory in the US, or a high-repetition laser-plasma electron source, are two possibilities.

    Even with this restriction, team member Mikhail Bashkanov is excited about the advantages. “If we could do away with the necessity of using nuclear reactors to produce medical isotopes and solely generate them with high-energy photon beams from laser-plasma accelerators, we could significantly improve nuclear medicine treatments and reduce their costs,” Bashkanov says.

    The researchers, who detail their work in Physical Review C, now plan to test their method at other electron accelerators, especially those with higher beam power and GeV-scale beams, to quantify the 67Cu yields they can expect to achieve in realistic target and beam-dump configurations. In parallel, Eslami adds, they want to explore parasitic operation at emerging laser-plasma-driven electron sources that are being developed for muon tomography. They would also like to link their irradiation studies to target design, radiochemistry and timing constraints to see whether the method can deliver clinically useful activities of 67Cu and other useful isotopes in a reliable and cost-effective way.

    The post Leftover gamma rays produce medically important radioisotopes appeared first on Physics World.

    https://physicsworld.com/a/leftover-gamma-rays-produce-medically-important-radioisotopes/
    Isabelle Dumé

    Top 10 Breakthroughs of the Year in physics for 2025 revealed

    A molecular superfluid, high-resolution microscope and a protein qubit are on our list

    Physics World is delighted to announce its Top 10 Breakthroughs of the Year for 2025, which includes research in astronomy, antimatter, atomic and molecular physics and more. The Top Ten is the shortlist for the Physics World Breakthrough of the Year, which will be revealed on Thursday 18 December.

    Our editorial team has looked back at all the scientific discoveries we have reported on since 1 January and has picked 10 that we think are the most important. In addition to being reported in Physics World in 2025, the breakthroughs must meet the following criteria:

    • Significant advance in knowledge or understanding
    • Importance of work for scientific progress and/or development of real-world applications
    • Of general interest to Physics World readers

    Here, then, are the Physics World Top 10 Breakthroughs for 2025, listed in no particular order. You can listen to Physics World editors make the case for each of our nominees in the Physics World Weekly podcast. And, come back next week to discover who has bagged the 2025 Breakthrough of the Year.

    Finding the stuff of life on an asteroid

    Analysing returned samples Tim McCoy (right), curator of meteorites at the Smithsonian’s National Museum of Natural History, and research geologist Cari Corrigan examine scanning electron microscope (SEM) images of a Bennu sample. (Courtesy: James Di Loreto, Smithsonian)

    To Tim McCoy, Sara Russell, Danny Glavin, Jason Dworkin, Yoshihiro Furukawa, Ann Nguyen, Scott Sandford, Zack Gainsforth and an international team of collaborators for identifying salt, ammonia, sugar, nitrogen- and oxygen-rich organic materials, and traces of metal-rich supernova dust, in samples returned from the near-Earth asteroid 101955 Bennu. The incredible chemical richness of this asteroid, which NASA’s OSIRIS-REx spacecraft visited in 2020, lends support to the longstanding hypothesis that asteroid impacts could have “seeded” the early Earth with the raw ingredients needed for life to form. The discoveries also enhance our understanding of how Bennu and other objects in the solar system formed out of the disc of material that coalesced around the young Sun.

    The first superfluid molecule

    To Takamasa Momose of the University of British Columbia, Canada, and Susumu Kuma of the RIKEN Atomic, Molecular and Optical Physics Laboratory, Japan for observing superfluidity in a molecule for the first time. Molecular hydrogen is the simplest and lightest of all molecules, and theorists predicted that it would enter a superfluid state at a temperature between 1‒2 K. But this is well below the molecule’s freezing point of 13.8 K, so Momose, Kuma and colleagues first had to develop a way to keep the hydrogen in a liquid state. Once they did that, they then had to work out how to detect the onset of superfluidity. It took them nearly 20 years, but by confining clusters of hydrogen molecules inside helium nanodroplets, embedding a methane molecule within the clusters, and monitoring the methane’s rotation, they were finally able to do it. They now plan to study larger clusters of hydrogen, with the aim of exploring the boundary between classical and quantum behaviour in this system.

    Hollow-core fibres break 40-year limit on light transmission

    To researchers at the University of Southampton and Microsoft Azure Fiber in the UK, for developing a new type of optical fibre that reduces signal loss, boosts bandwidth and promises faster, greener communications. The team, led by Francesco Poletti, achieved this feat by replacing the glass core of a conventional fibre with air and using glass membranes that reflect light at certain frequencies back into the core to trap the light and keep it moving through the fibre’s hollow centre. Their results show that the hollow-core fibres exhibit 35% less attenuation than standard glass fibres – implying that fewer amplifiers would be needed in long cables – and increase transmission speeds by 45%. Microsoft has begun testing the new fibres in real systems, installing segments in its network and sending live traffic through them. These trials open the door to gradual rollout and Poletti suggests that the hollow-core fibres could one day replace existing undersea cables.
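
    The quoted 45% gain in transmission speed follows from elementary optics rather than any exotic effect. As a rough, back-of-the-envelope check (the refractive index of about 1.45 for solid silica is a typical textbook value, not a figure quoted by the team), light guided in an air core outruns light in glass by roughly

    \frac{v_{\mathrm{air}}}{v_{\mathrm{silica}}} \;\approx\; \frac{n_{\mathrm{silica}}}{n_{\mathrm{air}}} \;\approx\; \frac{1.45}{1.00} \;\approx\; 1.45,

    which is where the latency advantage comes from; the 35% lower attenuation is a separate benefit of the light spending most of its journey in air rather than in glass.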

    First patient treatments delivered with proton arc therapy

    Trento Proton Therapy Centre researchers
    PAT pioneers The research team in the proton therapy gantry room. (Courtesy: UO Fisica Sanitaria and UO Protonterapia, APSS, Trento)

    To Francesco Fracchiolla and colleagues at the Trento Proton Therapy Centre in Italy for delivering the first clinical treatments using proton arc therapy (PAT). Proton therapy – a precision cancer treatment – is usually performed using pencil-beam scanning to precisely paint the dose onto the tumour. But this approach can be limited by the small number of beam directions deliverable in an acceptable treatment time. PAT overcomes this by moving to an arc trajectory with protons delivered over a large number of beam angles and the potential to optimize the number of energies used for each beam direction. Working with researchers at RaySearch Laboratories in Sweden, the team performed successful dosimetric comparisons with clinical proton therapy plans. Following a feasibility test that confirmed the viability of clinical PAT delivery, the researchers used PAT to treat nine cancer patients. Importantly, all treatments were performed using the centre’s existing proton therapy system and clinical workflow.

    A protein qubit for quantum biosensing

    To Peter Maurer and David Awschalom at the University of Chicago Pritzker School of Molecular Engineering and colleagues for designing a protein quantum bit (qubit) that can be produced directly inside living cells and used as a magnetic field sensor. While many of today’s quantum sensors are based on nitrogen–vacancy (NV) centres in diamond, they are large and hard to position inside living cells. Instead, the team used fluorescent proteins, which are just 3 nm in diameter and can be produced by cells at a desired location with atomic precision. These proteins possess similar optical and spin properties to those of NV centre-based qubits – namely that they have a metastable triplet state. The researchers used a near-infrared laser pulse to optically address a yellow fluorescent protein and read out its triplet spin state with up to 20% spin contrast. They then genetically modified the protein to be expressed in bacterial cells and measured signals with a contrast of up to 8%. They note that although this performance does not match that of NV quantum sensors, it could enable magnetic resonance measurements directly inside living cells, which NV centres cannot do.

    First two-dimensional sheets of metal

    To Guangyu Zhang, Luojun Du and colleagues at the Institute of Physics of the Chinese Academy of Sciences for producing the first 2D sheets of metal. Since the discovery of graphene – a sheet of carbon just one atom thick – in 2004, hundreds of other 2D materials have been fabricated and studied. In most of these, layers of covalently bonded atoms are separated by gaps where neighbouring layers are held together only by weak van der Waals (vdW) interactions, making it relatively easy to “shave off” single layers to make 2D sheets. Many thought that making atomically thin metals, however, would be impossible given that each atom in a metal is strongly bonded to surrounding atoms in all directions. The technique developed by Zhang and Du and colleagues involves heating powders of pure metals between two monolayer-MoS2/sapphire vdW anvils. Once the metal powders are melted into a droplet, the researchers applied a pressure of 200 MPa and continued this “vdW squeezing” until the opposite sides of the anvils cooled to room temperature and 2D sheets of metal were formed. The team produced five atomically thin 2D metals – bismuth, tin, lead, indium and gallium – with the thinnest being around 6.3 Å. The researchers say their work is just the “tip of the iceberg” and now aim to study fundamental physics with the new materials.

    Quantum control of individual antiprotons

    Photo of a physicist working at the BASE experiment
    Exquisite control Physicist Barbara Latacz at the BASE experiment at CERN. (Courtesy: CERN)

    To CERN’s BASE collaboration for being the first to perform coherent spin spectroscopy on a single antiproton – the antimatter counterpart of the proton. Their breakthrough is the most precise measurement yet of the antiproton’s magnetic properties, and could be used to test the Standard Model of particle physics. The experiment begins with the creation of high-energy antiprotons in an accelerator. These must be cooled (slowed down) to cryogenic temperatures without being lost to annihilation. Then, a single antiproton is held in an ultracold electromagnetic trap, where microwave pulses manipulate its spin state. The resulting resonance peak was 16 times narrower than previous measurements, enabling a significant leap in precision. This level of quantum control opens the door to highly sensitive comparisons of the properties of matter (protons) and antimatter (antiprotons). Unexpected differences could point to new physics beyond the Standard Model and may also reveal why there is much more matter than antimatter in the visible universe.
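
    For context, the way a Penning-trap experiment of this kind turns a spin resonance into a magnetic moment is usually expressed through a textbook frequency ratio (a general trap relation, not a detail spelled out in the summary above):

    \frac{g_{\bar{p}}}{2} \;=\; \frac{\nu_L}{\nu_c},

    where ν_L is the spin-precession (Larmor) frequency probed by the microwave pulses and ν_c is the antiproton’s cyclotron frequency in the same magnetic field. A resonance peak 16 times narrower therefore pins down ν_L, and hence the g-factor, correspondingly more precisely.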

    A smartphone-based early warning system for earthquakes

    To Richard Allen, director of the Berkeley Seismological Laboratory at the University of California, Berkeley, and Google’s Marc Stogaitis and colleagues for creating a global network of Android smartphones that acts as an earthquake early warning system. Traditional early warning systems use networks of seismic sensors that rapidly detect earthquakes in areas close to the epicentre and issue warnings across the affected region. Building such seismic networks, however, is expensive, and many earthquake-prone regions do not have them. The researchers utilized the accelerometer in millions of phones in 98 countries to create the Android Earthquake Alert (AEA) system. Testing the app between 2021 and 2024 led to the detection of an average of 312 earthquakes a month, with magnitudes ranging from 1.9 to 7.8. For earthquakes of magnitude 4.5 or higher, the system sent “TakeAction” alerts to users, sending them, on average, 60 times per month for an average of 18 million individual alerts per month. The system also delivered lesser “BeAware” alerts to regions expected to experience a shaking intensity of magnitude 3 or 4. The team now aims to produce maps of ground shaking, which could assist the emergency response services following an earthquake.

    A “weather map” for a gas giant exoplanet

    To Lisa Nortmann at Germany’s University of Göttingen and colleagues for creating the first detailed “weather map” of an exoplanet. The forecast for exoplanet WASP-127b is brutal, with winds reaching 33,000 km/hr, which is much faster than winds found anywhere in the Solar System. WASP-127b is a gas giant located about 520 light-years from Earth, and the team used the CRIRES+ instrument on the European Southern Observatory’s Very Large Telescope to observe the exoplanet as it transited across its star in less than 7 h. Spectral analysis of the starlight that filtered through WASP-127b’s atmosphere revealed Doppler shifts caused by supersonic equatorial winds. By analysing the range of Doppler shifts, the team created a rough weather map of WASP-127b, even though they could not resolve light coming from specific locations on the exoplanet. Nortmann and colleagues concluded that the exoplanet’s poles are cooler than the rest of WASP-127b, where temperatures can exceed 1000 °C. Water vapour was detected in the atmosphere, raising the possibility of exotic forms of rain.
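
    To get a feel for the size of the signal (illustrative arithmetic only – neither the wavelength nor the shift below is quoted by the team), the non-relativistic Doppler relation gives

    \frac{\Delta\lambda}{\lambda} \;\approx\; \frac{v}{c} \;\approx\; \frac{9.2\ \mathrm{km\,s^{-1}}}{3.0\times10^{5}\ \mathrm{km\,s^{-1}}} \;\approx\; 3\times10^{-5},

    so for a near-infrared line at, say, 2 µm the 33,000 km/hr (≈9.2 km/s) equatorial wind shifts the line by only about 0.06 nm – small, but within reach of a high-resolution spectrograph such as CRIRES+.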

    Highest-resolution images ever taken of a single atom

    To the team led by Yichao Zhang at the University of Maryland and Pinshane Huang of the University of Illinois at Urbana-Champaign for capturing the highest-resolution images ever taken of individual atoms in a material. The team used an electron-microscopy technique called electron ptychography to achieve a resolution of 15 pm, which is about 10 times smaller than the size of an atom. They studied a stack of two atomically-thin layers of tungsten diselenide, which were rotated relative to each other to create a moiré superlattice. These twisted 2D materials are of great interest to physicists because their electronic properties can change dramatically with small changes in rotation angle. The extraordinary resolution of their microscope allowed them to visualize collective vibrations in the material called moiré phasons. These are similar to phonons, but had never been observed directly until now. The team’s observations align with theoretical predictions for moiré phasons. Their microscopy technique should boost our understanding of the role that moiré phasons and other lattice vibrations play in the physics of solids. This could lead to the engineering of new and useful materials.


    Physics World‘s coverage of the Breakthrough of the Year is supported by Reports on Progress in Physics, which offers unparalleled visibility for your ground-breaking research.

    The post Top 10 Breakthroughs of the Year in physics for 2025 revealed appeared first on Physics World.

    https://physicsworld.com/a/top-10-breakthroughs-of-the-year-in-physics-for-2025-revealed/
    Hamish Johnston

    Exploring this year’s best physics research in our Top 10 Breakthroughs of 2025

    Lively chat about exoplanet weather, proton arc therapy, 2D metals and more

    The post Exploring this year’s best physics research in our Top 10 Breakthroughs of 2025 appeared first on Physics World.

    This episode of the Physics World Weekly podcast features a lively discussion about our Top 10 Breakthroughs of 2025, which include important research in quantum sensing, planetary science, medical physics, 2D materials and more. Physics World editors explain why we have made our selections and look at the broader implications of this impressive body of research.

    The top 10 serves as the shortlist for the Physics World Breakthrough of the Year award, the winner of which will be announced on 18 December.

    Links to all the nominees, more about their research and the selection criteria can be found here.


    Physics World‘s coverage of the Breakthrough of the Year is supported by Reports on Progress in Physics, which offers unparalleled visibility for your ground-breaking research.

    The post Exploring this year’s best physics research in our Top 10 Breakthroughs of 2025 appeared first on Physics World.

    https://physicsworld.com/a/exploring-this-years-best-physics-research-in-our-top-10-breakthroughs-of-2025/
    Hamish Johnston

    Astronomers observe a coronal mass ejection from a distant star

    Burst from M-dwarf star could be powerful enough to strip the atmosphere of any planets that orbit it, with implications for the search for extraterrestrial life

    The post Astronomers observe a coronal mass ejection from a distant star appeared first on Physics World.

    The Sun regularly produces energetic outbursts of electromagnetic radiation called solar flares. When these flares are accompanied by flows of plasma, they are known as coronal mass ejections (CMEs). Now, astronomers at the Netherlands Institute for Radio Astronomy (ASTRON) have spotted a similar event occurring on a star other than our Sun – the first unambiguous detection of a CME outside our solar system.

    Astronomers have long predicted that the radio emissions associated with CMEs from other stars should be detectable. However, Joseph Callingham, who led the ASTRON study, says that he and his colleagues needed the highly sensitive low-frequency radio telescope LOFAR – plus ESA’s XMM-Newton space observatory and “some smart software” developed by Cyril Tasse and Philippe Zarka at the Observatoire de Paris-PSL, France – to find one.

    A short, intense radio signal from StKM 1-1262

    Using these tools, the team detected short, intense radio signals from a star located around 40 light-years away from Earth. This star, called StKM 1-1262, is very different from our Sun. At only around half of the Sun’s mass, it is classed as an M-dwarf star. It also rotates 20 times faster and boasts a magnetic field 300 times stronger. Nevertheless, the burst it produced had the same frequency, time and polarization properties as the plasma emission from an event called a solar type II burst that astronomers identify as a fast CME when it comes from the Sun.
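
    Solar type II bursts are understood as plasma emission at (or near twice) the local plasma frequency of the medium a shock front is passing through. As background – this is the standard relation used to interpret such bursts, not a result of the new paper – the plasma frequency depends only on the local electron density:

    f_p \;\approx\; 8.98\ \mathrm{kHz}\times\sqrt{n_e\,[\mathrm{cm^{-3}}]},

    so as a CME-driven shock climbs outward into ever thinner plasma the emission drifts to lower frequencies, and that drift is what encodes the speed of the ejection.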

    “This work opens up a new observational frontier for studying and understanding eruptions and space weather around other stars,” says Henrik Eklund, an ESA research fellow working at the European Space Research and Technology Centre (ESTEC) in Noordwijk, Netherlands, who was not involved in the study. “We’re no longer limited to extrapolating our understanding of the Sun’s CMEs to other stars.”

    Implications for life on exoplanets

    The high speed of this burst – around 2400 km/s – would be atypical for our own Sun, with only around 1 in every 20 solar CMEs reaching that level. However, the ASTRON team says that M-dwarfs like StKM 1-1262 could emit CMEs of this type as often as once a day.

    An artist's impression of the XMM-Newton telescope, showing the telescope against a black, starry background with the Earth nearby
    Spotting a distant coronal mass ejection: An artist’s impression of XMM-Newton. (Courtesy: ESA/C Carreau)

    According to Eklund, this has implications for extraterrestrial life, as most of the known planets in the Milky Way are thought to orbit stars of this type, and such bursts could be powerful enough to strip their atmospheres. “It seems that intense space weather may be even more extreme around smaller stars – the primary hosts of potentially habitable exoplanets,” he says. “This has important implications for how these planets keep hold of their atmospheres and possibly remain habitable over time.”

    Erik Kuulkers, a project scientist at XMM-Newton who was also not directly involved in the study, suggests that this atmosphere-stripping ability could modify the way we hunt for life in stellar systems akin to our Solar System. “A planet’s habitability for life as we know it is defined by its distance from its parent star – whether or not it sits within the star’s ‘habitable zone’, a region where liquid water can exist on the surface of planets with suitable atmospheres,” Kuulkers says. “What if that star was especially active, regularly producing CMEs, however? A planet regularly bombarded by these ejections might lose its atmosphere entirely, leaving behind a barren, uninhabitable world, despite its orbit being ‘just right’.”

    Kuulkers adds that the study’s results also contain lessons for our own Solar System. “Why is there still life on Earth despite the violent material being thrown at us?” he asks. “It is because we are safeguarded by our atmosphere.”

    Seeking more data

    The ASTRON team’s next step will be to look for more stars like StKM 1-1262, which Kuulkers agrees is a good idea. “The more events we can find, the more we learn about CMEs and their impact on a star’s environment,” he says. Additional observations at other wavelengths “would help”, he adds, “but we have to admit that events like the strong one reported on in this work don’t happen too often, so we also need to be lucky enough to be looking at the right star at the right time.”

    For now, the ASTRON researchers, who report their work in Nature, say they have reached the limit of what they can detect with LOFAR. “The next step is to use the next generation Square Kilometre Array, which will let us find many more such stars since it is so much more sensitive,” Callingham tells Physics World.

    The post Astronomers observe a coronal mass ejection from a distant star appeared first on Physics World.

    https://physicsworld.com/a/astronomers-observe-a-coronal-mass-ejection-from-a-distant-star/
    Isabelle Dumé

    Sterile neutrinos: KATRIN and MicroBooNE come up empty handed

    Fourth flavour not seen in beta-decay and oscillation

    The post Sterile neutrinos: KATRIN and MicroBooNE come up empty handed appeared first on Physics World.

    Two major experiments have found no evidence for sterile neutrinos – hypothetical particles that could help explain some puzzling observations in particle physics. The KATRIN experiment searched for sterile neutrinos that could be produced during the radioactive decay of tritium; whereas the MicroBooNE experiment looked for the effect of sterile neutrinos on the transformation of muon neutrinos into electron neutrinos.

    Neutrinos are low-mass subatomic particles with zero electric charge that interact with matter only via the weak nuclear force and gravity. This makes neutrinos difficult to detect, despite the fact that the particles are produced in copious numbers by the Sun, nuclear reactors and collisions in particle accelerators.

    Neutrinos were first proposed in 1930 to explain the apparent missing momentum, spin and energy in the radioactive beta decay of nuclei. They were first observed in 1956, and by 1975 physicists were confident that three types (flavours) of neutrino existed – electron, muon and tau – along with their respective antiparticles. At the same time, however, it was becoming apparent that something was amiss with the Standard Model description of neutrinos because the observed neutrino flux from sources like the Sun did not tally with theoretical predictions.

    Gaping holes

    Then, in the late 1990s, experiments in Canada and Japan revealed that neutrinos of one flavour transform into other flavours as they propagate through space. This quantum phenomenon is called neutrino oscillation and requires that neutrinos have both flavour and mass. Takaaki Kajita and Art McDonald shared the 2015 Nobel Prize for Physics for this discovery – but that is not the end of the story.
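
    For readers who want the quantitative statement, the simplest two-flavour version of this effect is captured by a textbook formula (not taken from either experiment’s analysis):

    P(\nu_\alpha \to \nu_\beta) \;=\; \sin^2(2\theta)\,\sin^2\!\left(\frac{1.27\,\Delta m^2[\mathrm{eV^2}]\,L[\mathrm{km}]}{E[\mathrm{GeV}]}\right),

    where θ is the mixing angle, Δm² the difference of the squared masses, L the distance travelled and E the neutrino energy. The appearance of Δm² is why oscillation requires neutrinos to have mass in the first place.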

    One gaping hole in our knowledge is that physicists do not know the neutrino masses – having only measured upper limits for the three flavours. Furthermore, there is some experimental evidence that the current Standard-Model description of neutrino oscillation is not quite right. This includes lower-than-expected neutrino fluxes from some beta-decaying nuclei and some anomalous oscillations in neutrino beams.

    One possible explanation for these oscillation anomalies is the existence of a fourth type of neutrino. Because we have yet to detect this particle, the assumption is that it does not interact via the weak interaction – which is why these hypothetical particles are called sterile neutrinos.

    Electron energy curve

    Now, two very different neutrino experiments have both reported no evidence of sterile neutrinos. One is KATRIN, which is located at the Karlsruhe Institute of Technology (KIT) in Germany. It has the prime mission of making a very precise measurement of the mass of the electron antineutrino. The idea is to measure the energy spectrum of electrons emitted in the beta decay of tritium and infer an upper limit on the mass of the electron antineutrino from the shape of the curve.

    If sterile neutrinos exist, then they could sometimes be emitted in place of electron antineutrinos during beta decay. This would change the electron energy spectrum – but this was not observed at KATRIN.
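
    Schematically – this is the generic form used in such searches rather than KATRIN’s full fit model – a sterile neutrino of mass m_s that mixes with the electron antineutrino with angle θ would modify the β spectrum as

    \frac{\mathrm{d}\Gamma}{\mathrm{d}E} \;\propto\; \cos^2\theta\,R(E;\,m_\nu) \;+\; \sin^2\theta\,R(E;\,m_s),

    where R(E; m) is the usual tritium spectrum for a neutrino of mass m. The second term contributes only below the endpoint energy minus m_s c², so it would show up as a kink in the measured spectrum; the absence of such a kink is what allows limits to be set on sin²θ as a function of m_s.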

    “In the measurement campaigns underlying this analysis, we recorded over 36 million electrons and compared the measured spectrum with theoretical models. We found no indication of sterile neutrinos,” says Kathrin Valerius of the Institute for Astroparticle Physics at KIT and co-spokesperson of the KATRIN collaboration.

    Meanwhile, physicists on the MicroBooNE experiment at Fermilab in the US have looked for evidence for sterile neutrinos in how muon neutrinos oscillate into electron neutrinos. Beams of muon neutrinos are created by firing a proton beam at a solid target. The neutrinos at Fermilab then travel several hundred metres (in part through solid ground) to MicroBooNE’s liquid-argon time projection chamber. This detects electron neutrinos with high spatial and energy resolution, allowing detailed studies of neutrino oscillations.

    If sterile neutrinos exist, they would be involved in the oscillation process and would therefore affect the number of electron neutrinos detected by MicroBooNE. Neutrino beams from two different sources were used in the experiments, but no evidence for sterile neutrinos was found.

    Together, these two experiments rule out sterile neutrinos as an explanation for some – but not all – previously observed oscillation anomalies. So more work is needed to fully understand neutrino physics. Indeed, current and future neutrino experiments are well placed to discover physics beyond the Standard Model, which could lead to solutions to some of the greatest mysteries of physics.

    “Any time you rule out one place where physics beyond the Standard Model could be, that makes you look in other places,” says Justin Evans at the UK’s University of Manchester, who is co-spokesperson for MicroBooNE. “This is a result that is going to really spur a creative push in the neutrino physics community to come up with yet more exciting ways of looking for new physics.”

    Both groups report their results in papers in Nature: KATRIN paper; MicroBooNE paper.

    The post Sterile neutrinos: KATRIN and MicroBooNE come up empty handed appeared first on Physics World.

    https://physicsworld.com/a/sterile-neutrinos-katrin-and-microboone-come-up-empty-handed/
    Hamish Johnston

    Bridging borders in medical physics: guidance, challenges and opportunities

    New book provides expert advice for those looking to participate in global health initiatives

    The post Bridging borders in medical physics: guidance, challenges and opportunities appeared first on Physics World.

    Book cover: Global Medical Physics: A Guide for International Collaboration
    Educational aid Global Medical Physics: A Guide for International Collaboration explores the increasing role of medical physicists in international collaborations. The book comes in paperback, hardback and ebook format. An open-access ebook will be available in the near future. (Courtesy: CRC Press/Taylor & Francis)

    As the world population ages and the incidence of cancer and cardiac disease grows with it, there’s an ever-increasing need for reliable and effective diagnostics and treatments. Medical physics plays a central role in both of these areas – from the development of a suite of advanced diagnostic imaging modalities to the ongoing evolution of high-precision radiotherapy techniques.

    But access to medical physics resources – whether equipment and infrastructure, education and training programmes, or the medical physicists themselves – is massively imbalanced around the world. In low- and middle-income countries (LMICs), fewer than 50% of patients have access to radiotherapy, with similar shortfalls in the availability of medical imaging equipment. Lower-income countries also have the fewest medical physicists per capita.

    This disparity has led to an increasing interest in global health initiatives, with professional organizations looking to provide support to medical physicists in lower-income regions. At the same time, medical physicists and other healthcare professionals seek to collaborate internationally in clinical, educational and research settings.

    Successful multicultural collaborations, however, can be hindered by cultural, language and ethical barriers, as well as issues such as poor access to the internet and the latest technology advances. And medical physicists trained in high-income contexts may not always understand the circumstances and limitations of those working within lower income environments.

    Aiming to overcome these obstacles, a new book entitled Global Medical Physics: A Guide for International Collaboration provides essential guidance for those looking to participate in such initiatives. The text addresses the various complexities of partnering with colleagues in different countries and working within diverse healthcare environments, encompassing clinical and educational medical physics circles, as well as research and academic environments.

    “I have been involved in providing support to medical physicists in lower income contexts for a number of years, especially through the International Atomic Energy Agency (IAEA), but also through professional organizations like the American Association of Physicists in Medicine (AAPM),” explains the book’s editor Jacob Van Dyk, emeritus professor at Western University in Canada. “It is out of these experiences that I felt it might be appropriate and helpful to provide some educational materials that address these issues. The outcome was this book, with input from those with these collaborative experiences.”

    Shared experience

    The book brings together contributions from 34 authors across 21 countries, including both high- and low-resource settings. The authors – selected for their expertise and experience in global health and medical physics activities – provide guidelines for success, as well as noting potential barriers and concerns, on a wide range of themes targeted at multiple levels of expertise.

    This guidance includes, for example: advice on how medical physicists can contribute to educational, clinical and research-based global collaborations and the associated challenges; recommendations on building global inter-institutional collaborations, covering administrative, clinical and technical challenges and ethical issues; and a case study on the Radiation Planning Assistant project, which aims to use automated contouring and treatment planning to assist radiation oncologists in LMICs.

    In another chapter, the author describes the various career paths available to medical physicists, highlighting how they can help address the disparity in healthcare resources through their careers. There’s also a chapter focusing on CERN as an example of a successful collaboration engaging a worldwide community, including a discussion of CERN’s involvement in collaborative medical physics projects.

    With the rapid emergence of artificial intelligence (AI) in healthcare, the book takes a look at the role of information and communication technologies and AI within global collaborations. Elsewhere, authors highlight the need for data sharing in medical physics, describing example data sharing applications and technologies.

    Other chapters consider the benefits of cross-sector collaborations with industry, sustainability within global collaborations, the development of effective mentoring programmes – including a look at challenges faced by LMICs in providing effective medical physics education and training – and equity, diversity and inclusion and ethical considerations in the context of global medical physics.

    The book rounds off by summarizing the key topics discussed in the earlier chapters. This information is divided into six categories: personal factors, collaboration details, project preparation, planning and execution, and post-project considerations.

    “Hopefully, the book will provide an awareness of factors to consider when involved in global international collaborations, not only from a high-income perspective but also from a resource-constrained perspective,” says Van Dyk. “It was for this reason that when I invited authors to develop chapters on specific topics, they were encouraged to invite a co-author from another part of the world, so that it would broaden the depth of experience.”

    The post Bridging borders in medical physics: guidance, challenges and opportunities appeared first on Physics World.

    https://physicsworld.com/a/bridging-borders-in-medical-physics-guidance-challenges-and-opportunities/
    Tami Freeman

    Can we compare Donald Trump’s health chief to Soviet science boss Trofim Lysenko?

    Robert P Crease notes parallels between US and Soviet science

    The post Can we compare Donald Trump’s health chief to Soviet science boss Trofim Lysenko? appeared first on Physics World.

    The US has turned Trofim Lysenko into a hero.

    Born in 1898, Lysenko was a Ukrainian plant breeder, who in 1927 found he could make pea and grain plants develop at different rates by applying the right temperatures to their seeds. The Soviet news organ Pravda was enthusiastic, saying his discovery could make crops grow in winter, turn barren fields green, feed starving cattle and end famine.

    Despite having trained as a horticulturist, Lysenko rejected the then-emerging science of genetics in favour of Lamarckism, according to which organisms can pass on acquired traits to offspring. This meshed well with the Soviet philosophy of “dialectical materialism”, which sees both the natural and human worlds as evolving not through mechanisms but environment.

    Stalin took note of Lysenko’s activities and had him installed as head of key Soviet science agencies. Once in power, Lysenko dismissed scientists who opposed his views, cancelled their meetings, funded studies of discredited theories, and stocked committees with loyalists. Although Lysenko had lost his influence by the time Stalin died in 1953 – with even Pravda having turned against him – Soviet agricultural science had been destroyed.

    A modern parallel

    Lysenko’s views and actions have a resonance today when considering the activities of Robert F Kennedy Jr, who was appointed by Donald Trump as secretary of the US Department of Health and Human Services in February 2025. Of course, Trump has repeatedly sought to impose his own agenda on US science, with his destructive impact outlined in a detailed report published by the Union of Concerned Scientists in July 2025.

    Last May Trump signed executive order 14303, “Restoring Gold Standard Science”, which blasts scientists for not acting “in the best interests of the public”. He has withdrawn the US from the World Health Organization (WHO), ordered that federally sponsored research fund his own priorities, redefined the hazards of global warming, and cancelled the US National Climate Assessment (NCA), which had been running since 2000.

    But after Trump appointed Kennedy, the assault on science continued into US medicine, health and human services. In what might be called a philosophy of “political materialism”, Kennedy fired all 17 members of the Advisory Committee on Immunization Practices of the US Centers for Disease Control and Prevention (CDC), cancelled nearly $500m in mRNA vaccine contracts, hired a vaccine sceptic to study the alleged link between vaccines and autism, despite numerous studies that show no connection, and ordered the CDC to revise its website to reflect his own views on the cause of autism.

    In his 2021 book The Real Anthony Fauci: Bill Gates, Big Pharma, and the Global War on Democracy and Public Health, Kennedy promotes not germ theory but what he calls “miasma theory”, according to which diseases are prevented by nutrition and lifestyle.

    Divergent stories

    Of course, there are fundamental differences between the 1930s Soviet Union and the 2020s United States. Stalin murdered and imprisoned his opponents, while the US administration only defunds and fires them. Stalin and Lysenko were not voted in, while Trump came democratically to power, with elected representatives confirming Kennedy. Kennedy has also apologized for his most inflammatory remarks, though Stalin and Lysenko never did (nor does Trump for that matter).

    What’s more, Stalin’s and Lysenko’s actions were more grounded in apparent scientific realities and social vision than Trump’s or Kennedy’s. Stalin substantially built up much of the Soviet science and technology infrastructure, whose dramatic successes include launching the first Earth satellite Sputnik in 1957. Though it strains credulity to praise Stalin, his vision to expand Soviet agricultural production during a famine was at least plausible and its intention could be portrayed as humanitarian. Lysenko was a scientist, Kennedy is not.

    As for Lysenko, his findings seemed to carry on those of his scientific predecessors. Experimentally, he expanded the work of Russian botanist Ivan Michurin, who bred new kinds of plants able to grow in different regions. Theoretically, his work connected not only with dialectical materialism but also with that of the French naturalist Jean-Baptiste Lamarck, who claimed that acquired traits can be inherited.

    Trump and Kennedy are off-the-wall by comparison. Trump has called climate change a con job and hoax and seeks to stop research that says otherwise. In 2019 he falsely stated that Hurricane Dorian was predicted to hit Alabama, then ordered the National Oceanic and Atmospheric Administration to issue a statement supporting him. Trump has said he wants the US birth rate to rise and that he will be the “fertilization president”, but later fired fertility and IVF researchers at the CDC.

    As for Kennedy, he has said that COVID-19 “is targeted to attack Caucasians and Black people” and that Ashkenazi Jews and Chinese are the most immune (he disputed the remark, but it’s on video). He has also sought to retract a 2025 vaccine study from the Annals of Internal Medicine (178 1369) that directly refuted his views on autism.

    The critical point

    US Presidents often have pet scientific projects. Harry Truman created the National Science Foundation, Dwight D Eisenhower set up NASA, John F Kennedy started the Apollo programme, while Richard Nixon launched the Environmental Protection Agency (EPA) and the War on Cancer. But it’s one thing to support science that might promote a political agenda and another to quash science that will not.

    One ought to be able to take comfort in the fact that if you fight nature, you lose – except that the rest of us lose as well. Thanks to Lysenko’s actions, the Soviet Union lost millions of tons of grain and hundreds of herds of cattle. The promise of his work evaporated and Stalin’s dreams vanished.

    Lysenko, at least, was motivated by seeming scientific promise and social vision; the US has none. Trump has damaged the most important US scientific agencies, destroyed databases and eliminated the EPA’s research arm, while Kennedy has replaced health advisory committees with party loyalists.

    While Kennedy may not last his term – most Trump Cabinet officials don’t – the paths he has sent science policy on surely will. For Trump and Kennedy, the policy seems to consist only of supporting pet projects. Meanwhile, cases of measles in the US have reached their highest level in three decades, the seas continue to rise and the climate is changing. It is hard to imagine how enemy agents could damage US science more effectively.

    The post Can we compare Donald Trump’s health chief to Soviet science boss Trofim Lysenko? appeared first on Physics World.

    https://physicsworld.com/a/can-we-compare-donald-trumps-health-chief-to-soviet-science-boss-trofim-lysenko/
    Robert P Crease

    Diagnosing brain cancer without a biopsy

    A black phosphorus-based system detects micro-RNA in aqueous humor, enabling safe diagnosis of Primary Central Nervous System Lymphoma

    The post Diagnosing brain cancer without a biopsy appeared first on Physics World.

    Early diagnosis of primary central nervous system lymphoma (PCNSL) remains challenging because brain biopsies are invasive and imaging often lacks molecular specificity. A team led by researchers at Shenzhen University has now developed a minimally invasive fibre-optic plasmonic sensor capable of detecting PCNSL-associated microRNAs in the eye’s aqueous humor with attomolar sensitivity.

    At the heart of the approach is a black phosphorus (BP)–engineered surface plasmon resonance (SPR) interface. An ultrathin BP layer is deposited on a gold-coated fiber tip. Because of the work-function difference between BP and gold, electrons transfer from BP into the Au film, creating a strongly enhanced local electric field at the metal–semiconductor interface. This BP–Au charge-transfer nano-interface amplifies refractive-index changes at the surface far more efficiently than conventional metal-only SPR chips, enabling the detection of molecular interactions that would otherwise be too subtle to resolve and pushing the limit of detection down to 21 attomolar without nucleic-acid amplification. The BP layer also provides a high-area, biocompatible surface for immobilizing RNA reporters.
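
    As background on the readout (this is the textbook dispersion relation for a flat metal–dielectric interface, included for orientation only – the BP-modified, fibre-tip geometry used here is more involved), a surface plasmon propagating along a gold–sample boundary has wavevector

    k_{\mathrm{spp}} \;=\; \frac{\omega}{c}\sqrt{\frac{\varepsilon_m\,\varepsilon_d}{\varepsilon_m+\varepsilon_d}},

    where ε_m and ε_d are the permittivities of the metal and of the sample medium. Any change in the sample-side refractive index n_d = √ε_d alters the condition under which light guided in the fibre can couple to the plasmon, and it is this shift in the resonance that the sensor tracks.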

    To achieve sequence specificity, the researchers integrated CRISPR-Cas13a, an RNA-guided nuclease that becomes catalytically active only when its target sequence is perfectly matched to a designed CRISPR RNA (crRNA). When the target microRNA (miR-21) is present, activated Cas13a cleaves RNA reporters attached to the BP-modified fiber surface, releasing gold nanoparticles and reducing the local refractive index. The resulting optical shift is read out in real time through the SPR response of the BP-enhanced fiber probe, providing single-nucleotide-resolved detection directly on the plasmonic interface.

    With this combined strategy, the sensor achieved a limit of detection of 21 attomolar in buffer and successfully distinguished single-base-mismatched microRNAs. In tests on aqueous-humor samples from patients with PCNSL, the CRISPR-BP-FOSPR assay produced results that closely matched clinical qPCR data, despite operating without any amplification steps.

    Because aqueous-humor aspiration is a minimally invasive ophthalmic procedure, this BP-driven plasmonic platform may offer a practical route for early PCNSL screening, longitudinal monitoring, and potentially the diagnosis of other neurological diseases reflected in eye-fluid biomarkers. More broadly, the work showcases how black-phosphorus-based charge-transfer interfaces can be used to engineer next-generation, fibre-integrated biosensors that combine extreme sensitivity with molecular precision.

    Read the full article

    Ultra-sensitive detection of microRNA in intraocular fluid using optical fiber sensing technology for central nervous system lymphoma diagnosis

    Yanqi Ge et al 2025 Rep. Prog. Phys. 88 070502

    Do you want to learn more about this topic?

    Theoretical and computational tools to model multistable gene regulatory networks by Federico Bocci, Dongya Jia, Qing Nie, Mohit Kumar Jolly and José Onuchic (2023)

    The post Diagnosing brain cancer without a biopsy appeared first on Physics World.

    https://physicsworld.com/a/diagnosing-brain-cancer-without-a-biopsy/
    Lorna Brigham

    5f electrons and the mystery of δ-plutonium

    Scientists uncover the role of magnetic fluctuations in the counterintuitive behaviour of this rare plutonium phase

    The post 5f electrons and the mystery of δ-plutonium appeared first on Physics World.

    Plutonium is considered a fascinating element. It was first chemically isolated in 1941 at the University of California, but its discovery was hidden until after the Second World War. There are six distinct allotropic phases of plutonium with very different properties. At ambient pressure, continuously increasing the temperature converts the room-temperature, simple monoclinic α phase through five phase transitions, the final one occurring at approximately 450 °C.

    The delta (δ) phase is perhaps the most interesting allotrope of plutonium. δ-plutonium is technologically important and has a very simple crystal structure, but its electronic structure has been debated for decades. Researchers have attempted to understand its anomalous behaviour and how the properties of δ-plutonium are connected to the 5f electrons.

    The 5f electrons are found in the actinide group of elements, which includes plutonium. Their behaviour is counterintuitive. They are sensitive to temperature, pressure and composition, and behave both in a localised manner, staying close to the nucleus, and in a delocalised (itinerant) manner, more spread out and contributing to bonding. Both of these states can support magnetism, depending on the actinide element. The 5f electrons contribute to δ-phase stability, anomalies in the material’s volume and bulk modulus, and to a negative thermal expansion where the δ-phase reduces in size when heated.

    Research group from Lawrence Livermore National Laboratory
    Research group from Lawrence Livermore National Laboratory. Left to right: Lorin Benedict, Alexander Landa, Kyoung Eun Kweon, Emily Moore, Per Söderlind, Christine Wu, Nir Goldman, Randolph Hood and Aurelien Perron. Not in image: Babak Sadigh and Lin Yang (Courtesy: Blaise Douros/Lawrence Livermore National Laboratory)

    In this work, the researchers present a comprehensive model to predict the thermodynamic behaviour of δ-plutonium, which has a face-centred cubic structure. They use density functional theory, a computational technique that explores the overall electron density of the system, and incorporate relativistic effects to capture the behaviour of fast-moving electrons and complex magnetic interactions. The model includes a parameter-free orbital polarization mechanism to account for orbital-orbital interactions, and incorporates anharmonic lattice vibrations and magnetic fluctuations, both transverse and longitudinal modes, driven by temperature-induced excitations. Importantly, it is shown that negative thermal expansion results from magnetic fluctuations.

    This is the first model to integrate electronic effects, magnetic fluctuations, and lattice vibrations into a cohesive framework that aligns with experimental observations and semi-empirical models such as CALPHAD. It also accounts for fluctuating states beyond the ground state and explains how gallium composition influences thermal expansion. Additionally, the model captures the positive thermal expansion behaviour of the high-temperature epsilon phase, offering new insight into plutonium’s complex thermodynamics.

    Read the full article

    First principles free energy model with dynamic magnetism for δ-plutonium

    Per Söderlind et al 2025 Rep. Prog. Phys. 88 078001

    Do you want to learn more about this topic?

    Pu 5f population: the case for n = 5.0 J G Tobin and M F Beaux II (2025)

    The post 5f electrons and the mystery of δ-plutonium appeared first on Physics World.

    https://physicsworld.com/a/5f-electrons-and-the-mystery-of-%ce%b4-plutonium/
    Lorna Brigham

    Scientists explain why ‘seeding’ clouds with silver iodide is so efficient

    New characterization of the material's surface reveals how an atom-level rearrangement aids the formation of ice crystals and promotes precipitation

    The post Scientists explain why ‘seeding’ clouds with silver iodide is so efficient appeared first on Physics World.

    Silver iodide crystals have long been used to “seed” clouds and trigger precipitation, but scientists have never been entirely sure why the material works so well for that purpose. Researchers at TU Wien in Austria are now a step closer to solving the mystery thanks to a new study that characterized surfaces of the material in atomic-scale detail.

    “Silver iodide has been used in atmospheric weather modification programs around the world for several decades,” explains Jan Balajka from TU Wien’s Institute of Applied Physics, who led this research. “In fact, it was chosen for this purpose as far back as the 1940s because of its atomic crystal structure, which is nearly identical to that of ice – it has the same hexagonal symmetry and very similar distances between atoms in its lattice structure.”

    The basic idea, Balajka continues, originated with the 20th-century American atmospheric scientist Bernard Vonnegut, who suggested in 1947 that introducing small silver iodide (AgI) crystals into a cloud could provide nuclei for ice to grow on. But while Vonnegut’s proposal worked (and helped to inspire his brother Kurt’s novel Cat’s Cradle), this simple picture is not entirely accurate. The stumbling block is that nucleation occurs at the surface of a crystal, not inside it, and the atomic structure of an AgI surface differs significantly from its interior.

    A task that surface science has solved

    To investigate further, Balajka and colleagues used high-resolution atomic force microscopy (AFM) and advanced computer simulations to study the atomic structure of 2‒3 nm diameter AgI crystals when they are broken into two pieces. The team’s measurements revealed that the surfaces of both freshly cleaved structures differed from those found inside the crystal.

    More specifically, team member Johanna Hütner, who performed the experiments, explains that when an AgI crystal is cleaved, the silver atoms end up on one side while the iodine atoms appear on the other. This has implications for ice growth, because while the silver side maintains a hexagonal arrangement that provides an ideal template for the growth of ice layers, the iodine side reconstructs into a rectangular pattern that no longer lattice-matches the hexagonal symmetry of ice crystals. The iodine side is therefore incompatible with the epitaxial growth of hexagonal ice.

    “Our work solves this decades-long controversy of the surface vs bulk structure of AgI, and shows that structural compatibility does matter,” Balajka says.

    Difficult experiments

    According to Balajka, the team’s experiments were far from easy. Many experimental methods for studying the structure and properties of material surfaces are based on interactions with charged particles such as electrons or ions, but AgI is an electrical insulator, which “excludes most of the tools available,” he explains. Using AFM enabled them to overcome this problem, he adds, because this technique detects interatomic forces between a sharp tip and the surface and does not require a conductive sample.

    Another problem is that AgI is photosensitive and decomposes when exposed to visible light. While this property is useful in other contexts – AgI was a common ingredient in early photographic plates – it created complications for the TU Wien team. “Conventional AFM setups make use of optical laser detection to map the topography of a sample,” Balajka notes.

    To avoid destroying their sample while studying it, the researchers therefore had to use a non-contact AFM based on a piezoelectric sensor that detects electrical signals and does not require optical readout. They also adapted their setup to operate in near-darkness, using only red light while manipulating the AgI to ensure that stray light did not degrade the samples.

    The computational modelling part of the work introduced yet another hurdle to overcome. “Both Ag and I are atoms with a high number of electrons in their electron shells and are thus highly polarizable,” Balajka explains. “The interaction between such atoms cannot be accurately described by standard computational modelling methods such as density functional theory (DFT), so we had to employ highly accurate random-phase approximation (RPA) calculations to obtain reliable results.”

    Highly controlled conditions

    The researchers acknowledge that their study, which is detailed in Science Advances, was conducted under highly controlled conditions – ultrahigh vacuum, low pressure and temperature and a dark environment – that are very different from those that prevail inside real clouds. “The next logical step for us is therefore to confirm whether our findings hold under more representative conditions,” Balajka says. “We would like to find out whether the structure of AgI surfaces is the same in air and water, and if not, why.”

    The researchers would also like to better understand the atomic arrangement of the rectangular reconstruction of the iodine surface. “This would complete the picture for the use of AgI in ice nucleation, as well as our understanding of AgI as a material overall,” Balajka says.

    The post Scientists explain why ‘seeding’ clouds with silver iodide is so efficient appeared first on Physics World.

    https://physicsworld.com/a/scientists-explain-why-seeding-clouds-with-silver-iodide-is-so-efficient/
    Isabelle Dumé

    Slow spectroscopy sheds light on photodegradation

    Technique reveals how organic materials accumulate charge

    The post Slow spectroscopy sheds light on photodegradation appeared first on Physics World.

    Using a novel spectroscopy technique, physicists in Japan have revealed how organic materials accumulate electrical charge through long-term illumination by sunlight – leading to material degradation. Ryota Kabe and colleagues at the Okinawa Institute of Science and Technology have shown how charge separation occurs gradually via a rare multi-photon ionization process, offering new insights into how plastics and organic semiconductors degrade in sunlight.

    In a typical organic solar cell, an electron-donating material is interfaced with an electron acceptor. When the donor absorbs a photon, one of its electrons may jump across the interface, creating a bound electron-hole pair which may eventually dissociate – creating two free charges from which useful electrical work can be extracted.

    Although such an interface vastly boosts the efficiency of this process, it is not necessary for charge separation to occur when an electron donor is illuminated. “Even single-component materials can generate tiny amounts of charge via multiphoton ionization,” Kabe explains. “However, experimental evidence has been scarce due to the extremely low probability of this process.”

    To trigger charge separation in this way, an electron needs to absorb one or more additional photons while in its excited state. Since the vast majority of electrons fall back into their ground states before this can happen, the spectroscopic signature of this charge separation is very weak. This makes it incredibly difficult to detect using conventional spectroscopy techniques, which can generally only make observations over timescales of up to a few milliseconds.
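
    An order-of-magnitude argument shows why the process is so rare and why long-lived states help. Treating it as a generic two-step (resonance-enhanced) ionization – the symbols below are generic and this is not the authors’ kinetic model – the ionization rate per molecule scales as

    W_{\mathrm{ion}} \;\approx\; \sigma_{\mathrm{exc}}\,\Phi \;\times\; \tau \;\times\; \sigma_{\mathrm{ion}}\,\Phi, \qquad \Phi = \frac{I}{\hbar\omega},

    i.e. quadratically in the photon flux Φ and linearly in the excited-state lifetime τ. Stretching τ from the nanoseconds typical of singlet states to the far longer lifetimes of triplet states therefore boosts the yield by many orders of magnitude – consistent with the team’s reliance on a donor with a long triplet lifetime.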

    The opposite approach

    “While weak multiphoton pathways are easily buried under much stronger excited-state signals, we took the opposite approach in our work,” Kabe describes. “We excited samples for long durations and searched for traces of accumulated charges in the slow emission decay.”

    Key to this approach was an electron donor called NPD. This organic material has a relatively long triplet lifetime, where an excited electron is prevented from transitioning back to its ground state. As a result, these molecules emit phosphorescence over relatively long timescales.

    In addition, Kabe’s team dispersed their NPD samples into different host materials with carefully selected energy levels. In one medium, the energies of both the highest-occupied and lowest-unoccupied molecular orbitals lay below NPD’s corresponding levels, so that the host material acted as an electron acceptor. As a result, charge transfer occurred in the same way as it would across a typical donor-acceptor interface.

    Yet in another medium, the host’s lowest-unoccupied orbital lay above NPD’s – blocking charge transfer, and allowing triplet states to accumulate instead. In this case, the only way for charge separation to occur was through multi-photon ionization.

    Slow emission decay analysis

    Since NPD’s long triplet lifetime allowed its electrons to be excited gradually over an extended period of illumination, its weak charge accumulation became detectable through slow emission decay analysis. In contrast, more conventional methods involve multiple, ultra-fast laser pulses, severely restricting the timescale over which measurements can be made. Altogether, this approach enabled the team to clearly distinguish between the two charge generation pathways.

    “Using this method, we confirmed that charge generation occurred via resonance-enhanced multiphoton ionization mediated by long-lived triplet states, even in single-component organic materials,” Kabe describes.

    This result offers insights into how plastics and organic semiconductors are degraded by sunlight over years or decades. The conventional explanation is that sunlight generates free radicals. These are molecules that lose an electron through ionization, leaving behind an unpaired electron which readily reacts with other molecules in the surrounding environment. Since photodegradation unfolds over such a long timescale, researchers could not observe this charge generation in single-component organic materials – until now.

    “The method will be useful for analysing charge behaviour in organic semiconductor devices and for understanding long-term processes such as photodegradation that occur gradually under continuous light exposure,” Kabe says.

    The research is described in Science Advances.

    The post Slow spectroscopy sheds light on photodegradation appeared first on Physics World.

    https://physicsworld.com/a/slow-spectroscopy-sheds-light-on-photodegradation/
    No Author

    Fermilab opens new building dedicated to Tevatron pioneer Helen Edwards

    The Helen Edwards Engineering Research Center is designed to act as a collaborative space for scientists and engineers

    The post Fermilab opens new building dedicated to Tevatron pioneer Helen Edwards appeared first on Physics World.

    Fermilab has officially opened a new building named after the particle physicist Helen Edwards. Officials from the lab and the US Department of Energy (DOE) opened the Helen Edwards Engineering Research Center at a ceremony held on 5 December. The new building is the lab’s largest purpose-built lab and office space since the iconic Wilson Hall, which was completed in 1974.

    Construction of the Helen Edwards Engineering Research Center began in 2019 and was completed three years later. The centre is a 7500 m² multi-storey lab and office building that is adjacent and connected to Wilson Hall.

    The new centre is designed as a collaborative lab where engineers, scientists and technicians design, build and test technologies across several areas of research such as neutrino science, particle detectors, quantum science and electronics.

    The centre also features cleanrooms, vibration-sensitive labs and cryogenic facilities in which the components of the near detector for the Deep Underground Neutrino Experiment will be assembled and tested.

    A pioneering spirit

    With a PhD in experimental particle physics from Cornell University, Edwards was heavily involved with commissioning the university’s 10 GeV electron synchrotron. In 1970 Fermilab’s director Robert Wilson appointed Edwards as associate head of the lab’s booster section and she later became head of the accelerator division.

    While at Fermilab, Edwards’ primary responsibility was designing, constructing, commissioning and operating the Tevatron, which led to the discoveries of the top quark in 1995 and the tau neutrino in 2000.

    Edwards retired in the early 1990s but continued to work as a guest scientist at Fermilab, and she officially switched the Tevatron off during a ceremony held on 30 September 2011. Edwards died in 2016.

    Darío Gil, the undersecretary for science at the DOE, says that Edwards’ scientific work “is a symbol of the pioneering spirit of US research”.

    “Her contributions to the Tevatron and the lab helped the US become a world leader in the study of elementary particles,” notes Gil. “We honour her legacy by naming this research centre after her as Fermilab continues shaping the next generation of research using [artificial intelligence], [machine learning] and quantum physics.”

    The post Fermilab opens new building dedicated to Tevatron pioneer Helen Edwards appeared first on Physics World.

    https://physicsworld.com/a/fermilab-opens-new-building-dedicated-to-tevatron-pioneer-helen-edwards/
    Michael Banks

    Memristors could measure a single quantum of resistance

    Devices could eliminate the strong magnetic fields currently required to define the standard unit of resistance

    The post Memristors could measure a single quantum of resistance appeared first on Physics World.

    A proposed new way of defining the standard unit of electrical resistance would do away with the need for strong magnetic fields when measuring it. The new technique is based on memristors, which are programmable resistors originally developed as building blocks for novel computing architectures, and its developers say it would considerably simplify the experimental apparatus required to measure a single quantum of resistance for some applications.

    Electrical resistance is a physical quantity that represents how much a material opposes the flow of electrical current. It is measured in ohms (Ω), and since 2019, when the base units of the International System of Units (SI) were most recently revised, the ohm has been defined in terms of the von Klitzing constant h/e², where h and e are the Planck constant and the charge on an electron, respectively.

    To measure this resistance with high precision, scientists use the fact that the von Klitzing constant is related to the quantized change in the Hall resistance of a two-dimensional electron system (such as the one that forms in a semiconductor heterostructure) in the presence of a strong magnetic field. This quantized change in resistance is known as the quantum Hall effect (QHE), and in a material like GaAs or AlGaAs, it shows up at fields of around 10 Tesla. Generating such high fields typically requires a superconducting electromagnet, however.
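
    For reference, the von Klitzing constant and the quantized Hall resistances follow directly from the SI values of h and e that were fixed exactly in the 2019 revision. The short sketch below uses only those standard constants and is shown purely for illustration.

    ```python
    # Minimal sketch: the von Klitzing constant and the quantum Hall plateaus,
    # computed from the exact SI values of h and e (standard physics, shown
    # here for illustration only).
    h = 6.62607015e-34   # Planck constant, J s (exact since 2019)
    e = 1.602176634e-19  # elementary charge, C (exact since 2019)

    R_K = h / e**2       # von Klitzing constant, ~25 812.807 ohm
    print(f"R_K = h/e^2 = {R_K:.3f} ohm")

    # In the quantum Hall effect the Hall resistance is quantized at R_K / nu
    # for integer filling factors nu
    for nu in (1, 2, 3, 4):
        print(f"nu = {nu}: R_xy = {R_K / nu:.3f} ohm")
    ```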

    A completely different approach

    Researchers connected to a European project called MEMQuD are now advocating a completely different approach. Their idea is based on memristors, which are programmable resistors that “remember” their previous resistance state even after they have been switched off. This previous resistance state can be changed by applying a voltage or current.

    In the new work, a team led by Gianluca Milano of Italy’s Istituto Nazionale di Ricerca Metrologica (INRiM); Vitor Cabral of the Instituto Português da Qualidade; and Ilia Valov of the Institute of Electrochemistry and Energy Systems at the Bulgarian Academy of Sciences studied a device based on memristive nanoionics cells made from conducting filaments of silver. When an electrical field is applied to these filaments, their conductance changes in distinct, quantized steps.

    The MEMQuD team reports that the quantum conductance levels achieved in this set-up are precise enough to be exploited as intrinsic standard values. Indeed, a large inter-laboratory comparison confirmed that the values deviated by just -3.8% and 0.6% from the agreed SI values for the fundamental quantum of conductance, G0, and 2G0, respectively. The researchers attribute this precision to tight, atomic-level control over the morphology of the nanochannels responsible for quantum conductance effects, which they achieved by electrochemically polishing the silver filaments into the desired configuration.
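
    To put those percentages in context, the sketch below computes the conductance quantum G0 = 2e²/h and the absolute conductances implied by the quoted −3.8% and 0.6% deviations. The deviation figures come from the article; the conversion to siemens is illustrative only.

    ```python
    # Putting the quoted deviations in context: the conductance quantum
    # G0 = 2e^2/h, and the absolute values implied by the -3.8% (G0) and
    # +0.6% (2*G0) figures reported in the article. Illustrative only.
    h = 6.62607015e-34   # Planck constant, J s
    e = 1.602176634e-19  # elementary charge, C

    G0 = 2 * e**2 / h    # conductance quantum, ~77.48 microsiemens
    print(f"G0 = 2e^2/h = {G0 * 1e6:.3f} uS  (1/G0 = {1 / G0:.1f} ohm)")

    for label, nominal, deviation in [("G0", G0, -0.038), ("2*G0", 2 * G0, 0.006)]:
        measured = nominal * (1 + deviation)
        print(f"{label}: nominal {nominal * 1e6:.3f} uS, "
              f"measured ~{measured * 1e6:.3f} uS ({deviation * 100:+.1f}%)")
    ```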

    A national metrology institute condensed into a microchip

    The researchers say their results are building towards a concept known as an “NMI-in-a-chip” – that is, condensing the services of a national metrology institute into a microchip. “This could lead to measuring devices that have their resistance references built directly into the chip,” says Milano, “so doing away with complex measurements in laboratories and allowing for devices with zero-chain traceability – that is, those that do not require calibration since they have embedded intrinsic standards.”

    Yuma Okazaki of Japan’s National Institute of Advanced Industrial Science and Technology (AIST), who was not involved in this work, says that the new technique could indeed allow end users to directly access a quantum resistance standard.

    “Notably, this method can be demonstrated at room temperature and under ambient conditions, in contrast to conventional methods that require cryogenic and vacuum equipment, which is expensive and requires a lot of electrical power,” Okazaki says. “If such a user-friendly quantum standard becomes more stable and its uncertainty is improved, it could lead to a new calibration scheme for ensuring the accuracy of electronics used in extreme environments, such as space or the deep ocean, where traditional quantum standards that rely on cryogenic and vacuum conditions cannot be readily used.”

    The MEMQuD researchers, who report their work in Nature Nanotechnology, now plan to explore ways to further decrease deviations from the agreed SI values for G0 and 2G0. These include better material engineering, an improved measurement protocol, and strategies for topologically protecting the memristor’s resistance.

    The post Memristors could measure a single quantum of resistance appeared first on Physics World.

    https://physicsworld.com/a/memristors-could-measure-a-single-quantum-of-resistance/
    Isabelle Dumé

    Oak Ridge Quantum Science Center prioritizes joined-up thinking, multidisciplinary impacts

    QSC to accelerate convergence of quantum computing and exascale high-performance computing

    The post Oak Ridge Quantum Science Center prioritizes joined-up thinking, multidisciplinary impacts appeared first on Physics World.

    Travis Humble is a research leader who’s thinking big, dreaming bold, yet laser-focused on operational delivery. The long-game? To translate advances in fundamental quantum science into a portfolio of enabling technologies that will fast-track the practical deployment of quantum computers for at-scale scientific, industrial and commercial applications.

    As director of the Quantum Science Center (QSC) at Oak Ridge National Laboratory (ORNL) in East Tennessee, Humble and his management team are well placed to transform that research vision into scientific, economic and societal upside. Funded to the tune of $115 million through its initial five-year programme (2020–25), QSC is one of five dedicated National Quantum Information Science Research Centers (NQISRC) within the US Department of Energy (DOE) National Laboratory system.

    Validation came in spades last month when, despite the current turbulence around US science funding, QSC was given follow-on DOE backing of $125 million over five years (2025–30) to create “a new scientific ecosystem” for fault-tolerant, quantum-accelerated high-performance computing (QHPC). In short, QSC will target the critical research needed to amplify the impact of quantum computing through its convergence with leadership-class exascale HPC systems.

    “Our priority in Phase II QSC is the creation of a common software ecosystem to host the compilers, programming libraries, simulators and debuggers needed to develop hybrid-aware algorithms and applications for QHPC,” explains Humble. Equally important, QSC researchers will develop and integrate new techniques in quantum error correction, fault-tolerant computing protocols and hybrid algorithms that combine leading-edge computing capabilities for pre- and post-processing of quantum programs. “These advances will optimize quantum circuit constructions and accelerate the most challenging computational tasks within scientific simulations,” Humble adds.

    Classical computing, quantum opportunity

    At the heart of the QSC programme sits ORNL’s leading-edge research infrastructure for classical HPC, a capability that includes Frontier, the first supercomputer to break the exascale barrier and still one of the world’s most powerful. On that foundation, QSC is committed to building QHPC architectures that take advantage of both quantum computers and exascale supercomputing to tackle all manner of scientific and industrial problems beyond the reach of today’s HPC systems alone.

    “Hybrid classical-quantum computing systems are the future,” says Humble. “With quantum computers connecting both physically and logically to existing HPC systems, we can forge a scalable path to integrate quantum technologies into our scientific infrastructure.”

    Frontier, a high-performance supercomputer
    Quantum acceleration ORNL’s current supercomputer, Frontier, was the first high-performance machine to break the exascale barrier. Plans are in motion for a next-generation supercomputer, Discovery, to come online at ORNL by 2028. (Courtesy: Carlos Jones/ORNL, US DOE)

    Industry partnerships are especially important in this regard. Working in collaboration with the likes of IonQ, Infleqtion and QuEra, QSC scientists are translating a range of computationally intensive scientific problems – quantum simulations of exotic matter, for example – onto the vendors’ quantum computing platforms, generating excellent results out the other side.

    “With our broad representation of industry partners,” notes Humble, “we will establish a common framework by which scientific end-users, software developers and hardware architects can collaboratively advance these tightly coupled, scalable hybrid computing systems.”

    It’s a co-development model that industry values greatly. “Reciprocity is key,” Humble adds. “At QSC, we get to validate that QHPC can address real-world research problems, while our industry partners gather user feedback to inform the ongoing design and optimization of their quantum hardware and software.”

    Quantum impact

    Innovation being what it is, quantum computing systems will continue along an accelerating development trajectory, with more qubits, enhanced fidelity, error correction and fault tolerance as the key reference points on the roadmap. Phase II QSC, for its part, will integrate five parallel research thrusts to advance the viability and uptake of QHPC technologies.

    The collaborative software effort, led by ORNL’s Vicente Leyton, will develop openQSE, an adaptive, end-to-end software ecosystem for QHPC systems and applications. Yigit Subasi from Los Alamos National Laboratory (LANL) will lead the hybrid algorithms thrust, which will design algorithms that combine conventional and quantum methods to solve challenging problems in the simulation of model materials.

    Meanwhile, the QHPC architectures thrust, under the guidance of ORNL’s Chris Zimmer, will co-design hybrid computing systems that integrate quantum computers with leading-edge HPC systems. The scientific applications thrust, led by LANL’s Andrew Sornberger, will develop and validate applications of quantum simulation to be implemented on prototype QHPC systems. Finally, ORNL’s Michael McGuire will lead the thrust to establish experimental baselines for quantum materials that ultimately validate QHPC simulations against real-world measurements.

    Longer term, ORNL is well placed to scale up the QHPC model. After all, the laboratory is credited with pioneering the hybrid supercomputing model that uses graphics processing units in addition to conventional central processing units (including the launch in 2012 of Titan, the first supercomputer of this type operating at over 10 petaFLOPS).

    “The priority for all the QSC partners,” notes Humble, “is to transition from this still-speculative research phase in quantum computing, while orchestrating the inevitable convergence between quantum technology, existing HPC capabilities and evolving scientific workflows.”

    Collaborate, coordinate, communicate

    Much like its NQISRC counterparts (which have also been allocated further DOE funding through 2030), QSC provides the “operational umbrella” for a broad-scope collaboration of more than 300 scientists and engineers from 20 partner institutions. With its own distinct set of research priorities, that collective activity cuts across other National Laboratories (Los Alamos and Pacific Northwest), universities (among them Berkeley, Cornell and Purdue) and businesses (including IBM and IQM) to chart an ambitious R&D pathway addressing quantum-state (qubit) resilience, controllability and, ultimately, the scalability of quantum technologies.

    “QSC is a multidisciplinary melting pot,” explains Humble, “and I would say, alongside all our scientific and engineering talent, it’s the pooled user facilities that we are able to exploit here at Oak Ridge and across our network of partners that gives us our ‘grand capability’ in quantum science [see box, “Unique user facilities unlock QSC opportunities”]. Certainly, when you have a common research infrastructure, orchestrated as part of a unified initiative like QSC, then you can deliver powerful science that translates into real-world impacts.”

    Unique user facilities unlock QSC opportunities

    Stephen Streiffer tours the LINAC Tunnel at the Spallation Neutron Source
    Neutron insights ORNL director Stephen Streiffer tours the linear accelerator tunnel at the Spallation Neutron Source (SNS). QSC scientists are using the SNS to investigate entirely new classes of strongly correlated materials that demonstrate topological order and quantum entanglement. (Courtesy: Alonda Hines/ORNL, US DOE)

    Deconstructed, QSC’s Phase I remit (2020–25) spanned three dovetailing and cross-disciplinary research pathways: discovery and development of advanced materials for topological quantum computing (in which quantum information is stored in a stable topological state – or phase – of a physical system rather than the properties of individual particles or atoms); development of next-generation quantum sensors (to characterize topological states and support the search for dark matter); as well as quantum algorithms and simulations (for studies in fundamental physics and quantum chemistry).

    Underpinning that collective effort: ORNL’s unique array of scientific user facilities. A case in point is the Spallation Neutron Source (SNS), an accelerator-based neutron-scattering facility that enables a diverse programme of pure and applied research in the physical sciences, life sciences and engineering. QSC scientists, for example, are using SNS to investigate entirely new classes of strongly correlated materials that demonstrate topological order and quantum entanglement – properties that show great promise for quantum computing and quantum metrology applications.

    “The high-brightness neutrons at SNS give us access to this remarkable capability for materials characterization,” says Humble. “Using the SNS neutron beams, we can probe exotic materials, recover the neutrons that scatter off of them and, from the resultant signals, infer whether or not the materials exhibit quantum properties such as entanglement.”

    While SNS may be ORNL’s “big-ticket” user facility, the laboratory is also home to another high-end resource for quantum studies: the Center for Nanophase Materials Sciences (CNMS), one of the DOE’s five national Nanoscience Research Centers, which offers QSC scientists access to specialist expertise and equipment for nanomaterials synthesis; materials and device characterization; as well as theory, modelling and simulation in nanoscale science and technology.

    Thanks to these co-located capabilities, QSC scientists pioneered another intriguing line of enquiry – one that will now be taken forward elsewhere within ORNL – by harnessing so-called quantum spin liquids, in which electron spins can become entangled with each other to demonstrate correlations over very large distances (relative to the size of individual atoms).

    In this way, it is possible to take materials that have been certified as quantum-entangled and use them to design new types of quantum devices with unique geometries – as well as connections to electrodes and other types of control systems – to unlock novel physics and exotic quantum behaviours. The long-term goal? Translation of quantum spin liquids into a novel qubit technology to store and process quantum information.

    SNS, CNMS and Oak Ridge Leadership Computing Facility (OLCF) are DOE Office of Science user facilities.

    When he’s not overseeing the technical direction of QSC, Humble is acutely attuned to the need for sustained and accessible messaging. The priority? To connect researchers across the collaboration – physicists, chemists, material scientists, quantum information scientists and engineers – as well as key external stakeholders within the DOE, government and industry.

    “In my experience,” he concludes, “the ability of the QSC teams to communicate efficiently – to understand each other’s concepts and reasoning and to translate back and forth across disciplinary boundaries – remains fundamental to the success of our scientific endeavours.”

    Further information

    Listen to the Physics World podcast: Oak Ridge’s Quantum Science Center takes a multidisciplinary approach to developing quantum materials and technologies

    Scaling the talent pipeline in quantum science

    Quantum science graduate students and postdoctoral researchers present and discuss their work during a poster session
    The next generation Quantum science graduate students and postdoctoral researchers present and discuss their work during a poster session at the fifth annual QSC Summer School. Hosted at Purdue University in April this year, the school is one of several workforce development efforts supported by QSC. (Courtesy: Dave Mason/Purdue University)

    With an acknowledged shortage of skilled workers across the quantum supply chain, QSC is doing its bit to bolster the scientific and industrial workforce. Front-and-centre: the fifth annual QSC Summer School, which was held at Purdue University in April this year, hosting 130 graduate students (the largest cohort to date) through an intensive four-day training programme.

    The Summer School sits as part of a long-term QSC initiative to equip ambitious individuals with the specialist domain knowledge and skills needed to thrive in a quantum sector brimming with opportunity – whether that’s in scientific research or out in industry with hardware companies, software companies or, ultimately, the end-users of quantum technologies in key verticals like pharmaceuticals, finance and healthcare.

    “While PhD students and postdocs are integral to the QSC research effort, the Summer School exposes them to the fundamental ideas of quantum science elaborated by leading experts in the field,” notes Vivien Zapf, a condensed-matter physicist at Los Alamos National Laboratory who heads up QSC’s advanced characterization efforts.

    “It’s all about encouraging the collective conversation,” she adds, “with lots of opportunities for questions and knowledge exchange. Overall, our emphasis is very much on training up scientists and engineers to work across the diversity of disciplines needed to translate quantum technologies out of the lab into practical applications.”

    The programme isn’t for the faint-hearted, though. Student delegates kicked off this year’s proceedings with a half-day of introductory presentations on quantum materials, devices and algorithms. Next up: three and a half days of intensive lectures, panel discussions and poster sessions covering everything from entangled quantum networks to quantum simulations of superconducting qubits.

    Many of the Summer School’s sessions were also made available virtually on Purdue’s Quantum Coffeehouse Live Stream on YouTube – the streamed content reaching quantum learners across the US and further afield. Lecturers were drawn from the US National Laboratories, leading universities (such as Harvard and Northwestern) and the quantum technology sector (including experts from IBM, PsiQuantum, NVIDIA and JPMorganChase).

    The post Oak Ridge Quantum Science Center prioritizes joined-up thinking, multidisciplinary impacts appeared first on Physics World.

    https://physicsworld.com/a/oak-ridge-quantum-science-center-prioritizes-joined-up-thinking-multidisciplinary-impacts/
    No Author

    So you want to install a wind turbine? Here’s what you need to know

    Janina Moereke discovers the practicalities of installing wind turbines in a forest

    The post So you want to install a wind turbine? Here’s what you need to know appeared first on Physics World.

    As a physicist in industry, I spend my days developing new types of photovoltaic (PV) panels. But I’m also keen to do something for the transition to green energy outside work, which is why I recently installed two PV panels on the balcony of my flat in Munich. Fitting them was great fun – and I can now enjoy sunny days even more knowing that each panel is generating electricity.

    However, the panels, which each have a peak power of 440 W, don’t cover all my electricity needs, which prompted me to take an interest in a plan to build six wind turbines in a forest near me on the outskirts of Munich. Curious about the project, I particularly wanted to find out when the turbines will start generating electricity for the grid. So when I heard that a weekend cycle tour of the site was being organized to showcase it to local residents, I grabbed my bike and joined in.

    As we cycle, I discover that the project – located in Forstenrieder Park – is the joint effort of four local councils and two “citizen-energy” groups, who’ve worked together for the last five years to plan and start building the six turbines. Each tower will be 166 m high and the rotor blades will be 80 m long, with the plan being for them to start operating in 2027.

    I’ve never thought of Munich as a particularly windy city, but at the height at which the blades operate, there’s always a steady, reliable flow of wind

    I’ve never thought of Munich as a particularly windy city. But tour leader Dieter Maier, who’s a climate adviser to Neuried council, explains that at the height at which the blades operate, there’s always a steady, reliable flow of wind. In fact, each turbine has a rated power output of 6.5 MW and will deliver a total of 10 GWh of energy over the course of a year.
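
    A quick back-of-the-envelope check of those figures: a 6.5 MW turbine delivering 10 GWh a year implies a capacity factor of roughly 18%. The sketch below simply reproduces that arithmetic using the article’s numbers; the calculation itself is for illustration only.

    ```python
    # Back-of-the-envelope check of the quoted turbine figures (illustrative;
    # the 6.5 MW and 10 GWh numbers come from the article).
    rated_power_mw = 6.5          # rated power output per turbine, MW
    annual_energy_gwh = 10.0      # expected yearly energy per turbine, GWh
    hours_per_year = 8760

    max_possible_gwh = rated_power_mw * hours_per_year / 1000  # ~56.9 GWh
    capacity_factor = annual_energy_gwh / max_possible_gwh     # ~0.18

    print(f"Maximum possible output: {max_possible_gwh:.1f} GWh/year")
    print(f"Implied capacity factor: {capacity_factor:.1%}")
    ```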

    Practical questions

    Cycling around, I’m excited to think that a single turbine could end up providing the entire electricity demand for Neuried. But installing wind turbines involves much more than just the technicalities of generating electricity. How do you connect the turbines to the grid? How do you ensure planes don’t fly into the turbines? What about wildlife conservation and biodiversity?

    At one point on our tour, we cycle round a 90-degree bend in the forest and I wonder how a huge, 80 m-long blade will be transported round that kind of tight angle. Trees will almost certainly have to be felled to get the blade in place, which sounds questionable for a supposedly green project. Fortunately, project leaders have been working with the local forest manager and conservationists, finding ways to help improve the local biodiversity despite the loss of trees.

    As a representative of BUND (one of Germany’s biggest conservation charities) explains on the tour, a natural, or “unmanaged”, forest consists of a mix of areas with a higher or lower density of trees. But Forstenrieder Park has been a managed forest for well over a century and is mostly thick with trees. Clearing trees for the turbines will therefore allow conservationists to grow more of the bushes and plants that currently struggle to find space to flourish.

    Small group of bikes at the edge of a large clearing in a forest
    Cut and cover Trees in Forstenrieder Park have had to be chopped down to provide room for new wind turbines to be installed, but the open space will let conservationists grow plants and bushes to boost biodiversity. (Courtesy: Janina Moereke)

    To avoid endangering birds and bats native to this forest, meanwhile, the turbines will be turned off when the animals are most active, which coincidentally corresponds to low wind periods in Munich. Insurance costs have to be factored in too. Thankfully, it’s quite unlikely that a turbine will burn down or get ice all over its blades, which means liability insurance costs are low. But vandalism is an ever-present worry.

    In fact, at the end of our bike tour, we’re taken to a local wind turbine that is already up and running about 13 km further south of Forstenrieder Park. This turbine, I’m disappointed to discover, was vandalized back in 2024, which led to it being fenced off and video surveillance cameras being installed.

    But for all the difficulties, I’m excited by the prospect of the wind turbines supporting the local energy needs. I can’t wait for the day when I’m on my balcony, solar panels at my side, sipping a cup of tea made with water boiled by electricity generated by the rotor blades I can see turning round and round on the horizon.

    The post So you want to install a wind turbine? Here’s what you need to know appeared first on Physics World.

    https://physicsworld.com/a/so-you-want-to-install-a-wind-turbine-heres-what-you-need-to-know/
    No Author

    Galactic gamma rays could point to dark matter

    Spectrum from the Milky Way’s halo matches WIMP annihilation

    The post Galactic gamma rays could point to dark matter appeared first on Physics World.

    Fermi telescope data
    Excess radiation Gamma-ray intensity map excluding components other than the halo, spanning approximately 100° in the direction of the centre of the Milky Way. The blank horizontal bar is the galactic plane area, which was excluded from the analysis to avoid strong astrophysical radiation. (Courtesy: Tomonori Totani/The University of Tokyo)

    Gamma rays emitted from the halo of the Milky Way could be produced by hypothetical dark-matter particles. That is the conclusion of an astronomer in Japan who has analysed data from NASA’s Fermi Gamma-ray Space Telescope. The energy spectrum of the emission is what would be expected from the annihilation of particles called WIMPs. If this can be verified, it would mark the first observation of dark matter via electromagnetic radiation.

    Since the 1930s astronomers have known that there is something odd about galaxies, galaxy clusters and larger structures in the universe. The problem is that there is not nearly enough visible matter in these objects to explain their dynamics and structure. A rotating galaxy, for example, should be flinging out its stars because it does not have enough self-gravitation to hold itself together.

    Today, the most popular solution to this conundrum is the existence of a hypothetical substance called dark matter. Dark-matter particles would have mass and interact with each other and normal matter via the gravitational force, gluing rotating galaxies together. However, the fact that we have never observed dark matter directly means that the particles must rarely, if ever, interact via the other three forces.

    Annihilating WIMPs

    The weakly interacting massive particle (WIMP) is a dark-matter candidate that interacts via the weak nuclear force (or a similarly weak force). As a result of this interaction, pairs of WIMPs are expected to occasionally annihilate to create high-energy gamma rays and other particles. If this is true, dense areas of the universe such as galaxies should be sources of these gamma rays.

    Now, Tomonori Totani of the University of Tokyo has analysed data from the Fermi telescope and identified an excess of gamma rays emanating from the halo of the Milky Way. What is more, Totani’s analysis suggests that the energy spectrum of the excess radiation (from about 10 to 100 GeV) is consistent with hypothetical WIMP annihilation processes.

    “If this is correct, to the extent of my knowledge, it would mark the first time humanity has ‘seen’ dark matter,” says Totani. “This signifies a major development in astronomy and physics,” he adds.

    While Totani is confident of his analysis, his conclusion must be verified independently. Furthermore, work will be needed to rule out conventional astrophysical sources of the excess radiation.

    Catherine Heymans, who is Astronomer Royal for Scotland, told Physics World, “I think it’s a really nice piece of work, and exactly what should be happening with the Fermi data”. She describes Totani’s paper as “well written and thorough”. The research is described in Journal of Cosmology and Astroparticle Physics.

    The post Galactic gamma rays could point to dark matter appeared first on Physics World.

    https://physicsworld.com/a/galactic-gamma-rays-could-point-to-dark-matter/
    Hamish Johnston

    Simple feedback mechanism keeps flapping flyers stable when hovering

    Discovery could improve the performance of hovering robots and even artificial pollinators

    The post Simple feedback mechanism keeps flapping flyers stable when hovering appeared first on Physics World.

    Researchers in the US have shed new light on the puzzling and complex flight physics of creatures such as hummingbirds, bumblebees and dragonflies that flap their wings to hover in place. According to an interdisciplinary team at the University of Cincinnati, the mechanism these animals deploy can be described by a very simple, computationally basic, stable and natural feedback mechanism that operates in real time. The work could aid the development of hovering robots, including those that could act as artificial pollinators for crops.

    If you’ve ever watched a flapping insect or hummingbird hover in place – often while engaged in other activities such as feeding or even mating – you’ll appreciate how remarkable they are. To stay aloft and stable, these animals must constantly sense their position and motion and make corresponding adjustments to their wing flaps.

    Feedback mechanism relies on two main components

    Biophysicists have previously put forward many highly complex explanations for how they do this, but according to the Cincinnati team of Sameh Eisa and Ahmed Elgohary, some of this complexity is not necessary. Earlier this year, the pair developed their own mathematical and control theory based on a mechanism they call “extremum seeking for vibrational stabilization”.

    Eisa describes this mechanism as “very natural” because it relies on just two main components. The first is the wing flapping motion itself, which he says is “naturally built in” for flapping creatures that use it to propel themselves. The second is a simple feedback mechanism involving sensations and measurements related to the altitude at which the creatures aim to stabilize their hovering.

    The general principle, he continues, is that a system (in this case an insect or hummingbird) can steer itself towards a stable position by continuously adjusting a high-amplitude, high-frequency input control or signal (in this case, a flapping wing action). “This adjustment is simply based on the feedback of measurement (the insects’ perceptions) and stabilization (hovering) occurs when the system optimizes what it is measuring,” he says.
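
    For readers who want to see the idea in code, the sketch below implements the textbook form of extremum-seeking control – a high-frequency dither added to the current estimate, demodulation of the sensed objective and a slow integrator – applied to a toy objective. It is not the researchers’ flight model; the objective function and every parameter are illustrative assumptions.

    ```python
    import math

    # Minimal sketch of textbook extremum-seeking control (not the researchers'
    # flight model): a high-frequency dither perturbs the current estimate, the
    # sensed objective is demodulated with the same dither and integrated, and
    # the estimate drifts towards the optimum. All values are illustrative.

    def sensed_objective(z):
        """Hypothetical stand-in for what the flyer senses, peaking at z = 1."""
        return -(z - 1.0) ** 2

    dt, omega, amp, gain = 1e-3, 50.0, 0.1, 10.0
    z_hat = 0.0                                   # initial estimate of the optimum
    for step in range(200_000):                   # 200 s of simulated time
        dither = amp * math.sin(omega * step * dt)
        measurement = sensed_objective(z_hat + dither)   # value at perturbed state
        z_hat += gain * measurement * dither * dt        # demodulate and integrate
        # (a high-pass filter on the measurement is often added; omitted for brevity)

    print(f"estimate after adaptation: {z_hat:.2f} (optimum is 1.00)")
    ```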

    As well as being relatively easy to describe, Eisa tells Physics World that this mechanism is biologically plausible and computationally basic, dramatically simplifying the physics of hovering. “It is also categorically different from all available results and explanations in the literature for how stable hovering by insects and hummingbirds can be achieved,” he adds.

    Researchers at dinner
    The researchers and colleagues. (Courtesy: S Eisa)

    Interdisciplinary work

    In the latest study, which is detailed in Physical Review E, the researchers compared their simulation results to reported biological data on a hummingbird and five flapping insects (a bumblebee, a cranefly, a dragonfly, a hawkmoth and a hoverfly). They found that their simulation fit the data very closely. They also ran an experiment on a flapping, light-sensing robot and observed that it behaved like a moth: it elevated itself to the level of the light source and then stabilized its hovering motion.

    Eisa says he has always been fascinated by such optimized biological behaviours. “This is especially true for flyers, where mistakes in execution could potentially mean death,” he says. “The physics behind the way they do it is intriguing and it probably needs elegant and sophisticated mathematics to be described. However, the hovering creatures appear to be doing this very simply and I found discovering the secret of this puzzle very interesting and exciting.”

    Eisa adds that this element of the work ended up being very interdisciplinary, and both his own PhD in applied mathematics and the aerospace engineering background of Elgohary came in very useful. “We also benefited from lengthy discussions with a biologist colleague who was a reviewer of our paper,” Eisa says. “Luckily, they recognized the value of our proposed technique and ended up providing us with very valuable inputs.”

    Eisa thinks the work could open up new lines of research in several areas of science and engineering. “For example, it opens up new ideas in neuroscience and animal sensory mechanisms and could almost certainly be applied to the development of airborne robotics and perhaps even artificial pollinators,” he says. “The latter might come in useful in the future given the high rate of death many species of pollinating insects are encountering today.”

    The post Simple feedback mechanism keeps flapping flyers stable when hovering appeared first on Physics World.

    https://physicsworld.com/a/simple-feedback-mechanism-keeps-flapping-flyers-stable-when-hovering/
    Isabelle Dumé

    Building a quantum future using topological phases of matter and error correction

    Tim Hsieh of the Perimeter Institute is our podcast guest

    The post Building a quantum future using topological phases of matter and error correction appeared first on Physics World.

    This episode of the Physics World Weekly podcast features Tim Hsieh of Canada’s Perimeter Institute for Theoretical Physics. We explore some of today’s hottest topics in quantum science and technology – including topological phases of matter, quantum error correction and quantum simulation.

    Our conversation begins with an exploration of the quirky properties of quantum matter and how these can be exploited to create quantum technologies. We look at the challenges that must be overcome to create large-scale quantum computers, and Hsieh reveals which problem he would solve first if he had access to a powerful quantum processor.

    This interview was recorded earlier this autumn when I had the pleasure of visiting the Perimeter Institute and speaking to four physicists about their research. This is the third of those conversations to appear on the podcast.

    The first interview in this series from the Perimeter Institute was with Javier Toledo-Marín, “Quantum computing and AI join forces for particle physics”; and the second was with Bianca Dittrich, “Quantum gravity: we explore spin foams and other potential solutions to this enduring challenge”.

    This episode is supported by the APS Global Physics Summit, which takes place on 15–20 March, 2026, in Denver, Colorado, and online.

    The post Building a quantum future using topological phases of matter and error correction appeared first on Physics World.

    https://physicsworld.com/a/building-a-quantum-future-using-topological-phases-of-matter-and-error-correction/
    Hamish Johnston

    Generative AI model detects blood cell abnormalities

    The CytoDiffusion classifier analyses the shape and structure of blood cells to detect abnormalities that may indicate blood disorders

    The post Generative AI model detects blood cell abnormalities appeared first on Physics World.

    Blood cell images
    Generative classification The CytoDiffusion classifier accurately identifies a wide range of blood cell appearances and detects unusual or rare blood cells that may indicate disease. The diagonal grid elements display original images of each cell type, while the off-diagonal elements show heat maps that provide insight into the model’s decision-making rationale. (Courtesy: Simon Deltadahl)

    The shape and structure of blood cells provide vital indicators for diagnosis and management of blood disease and disorders. Recognizing subtle differences in the appearance of cells under a microscope, however, requires the skills of experts with years of training, motivating researchers to investigate whether artificial intelligence (AI) could help automate this onerous task. A UK-led research team has now developed a generative AI-based model, known as CytoDiffusion, that characterizes blood cell morphology with greater accuracy and reliability than human experts.

    Conventional discriminative machine learning models can match human performance at classifying cells in blood samples into predefined classes. But discriminative models, which learn to recognise cell images based on expert labels, struggle with never-before-seen cell types and images from differing microscopes and staining techniques.

    To address these shortfalls, the team – headed up at the University of Cambridge, University College London and Queen Mary University of London – created CytoDiffusion around a diffusion-based generative AI classifier. Rather than just learning to separate cell categories, CytoDiffusion models the full range of blood cell morphologies to provide accurate classification with robust anomaly detection.

    “Our approach is motivated by the desire to achieve a model with superhuman fidelity, flexibility and metacognitive awareness that can capture the distribution of all possible morphological appearances,” the researchers write.
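
    Conceptually, a diffusion classifier scores each candidate class by how well a class-conditional denoiser reconstructs the noised image, and a uniformly poor best score can flag an anomaly. The sketch below illustrates that general idea only – it is not the CytoDiffusion code, and the toy denoiser and class set are invented for the example.

    ```python
    import numpy as np

    # Minimal sketch of diffusion-based classification in general (not the
    # CytoDiffusion code): noise the image at random levels, ask a
    # class-conditional denoiser to predict that noise for every candidate
    # class, and pick the class with the lowest prediction error; a poor best
    # score can flag an anomaly. The toy denoiser and classes are invented.

    rng = np.random.default_rng(0)
    CLASS_MEANS = {"neutrophil": 0.2, "lymphocyte": 0.6, "monocyte": 0.9}  # toy classes

    def toy_denoiser(noisy, t, label):
        """Hypothetical class-conditional model: predicts the added noise assuming
        the clean image equals that class's mean intensity."""
        return (noisy - np.sqrt(1 - t) * CLASS_MEANS[label]) / np.sqrt(t)

    def classify(image, n_trials=256, anomaly_threshold=0.5):
        errors = {label: 0.0 for label in CLASS_MEANS}
        for _ in range(n_trials):
            t = rng.uniform(0.05, 0.95)                       # random noise level
            noise = rng.normal(size=image.shape)
            noisy = np.sqrt(1 - t) * image + np.sqrt(t) * noise
            for label in CLASS_MEANS:
                pred = toy_denoiser(noisy, t, label)
                errors[label] += np.mean((pred - noise) ** 2) / n_trials
        best = min(errors, key=errors.get)
        return best, float(errors[best]), bool(errors[best] > anomaly_threshold)

    label, error, is_anomaly = classify(np.full((8, 8), 0.58))  # "lymphocyte-like" image
    print(label, round(error, 4), "anomaly" if is_anomaly else "in distribution")
    ```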

    Authenticity and accuracy

    For AI-based analysis to be adopted in the clinic, it’s essential that users trust a model’s learned representations. To assess whether CytoDiffusion could effectively capture the distribution of blood cell images, the team used it to generate synthetic blood cell images. Analysis by experienced haematologists revealed that these synthetic images were near-indistinguishable from genuine images, showing that CytoDiffusion genuinely learns the morphological distribution of blood cells rather than using artefactual shortcuts.

    The researchers used multiple datasets to develop and evaluate their diffusion classifier, including CytoData, a custom dataset containing more than half a million anonymized cell images from almost 3000 blood smear slides. In standard classification tasks across these datasets, CytoDiffusion achieved state-of-the-art performance, matching or exceeding the capabilities of traditional discriminative models.

    Effective diagnosis from blood smear samples also requires the ability to detect rare or previously unseen cell types. The researchers evaluated CytoDiffusion’s ability to detect blast cells (immature blood cells) in the test datasets. Blast cells are associated with blood malignancies such as leukaemia, and high detection sensitivity is essential to minimize false negatives.

    In one dataset, CytoDiffusion detected blast cells with sensitivity and specificity of 0.905 and 0.962, respectively. In contrast, a discriminative model exhibited a poor sensitivity of 0.281. In datasets with erythroblasts as the abnormal cells, CytoDiffusion again outperformed the discriminative model, demonstrating that it can detect abnormal cell types not present in its training data, with the high sensitivity required for clinical applications.
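
    For readers unfamiliar with the metrics, sensitivity and specificity are simple ratios over a confusion matrix. The sketch below shows the definitions; the counts are made up purely to reproduce the quoted 0.905 and 0.962 values.

    ```python
    # Definitions of the metrics quoted above (the 0.905/0.962/0.281 figures
    # come from the article; the confusion-matrix counts below are hypothetical,
    # chosen only to illustrate the formulas).
    def sensitivity(tp, fn):
        """Fraction of truly abnormal cells that are flagged (low = missed blasts)."""
        return tp / (tp + fn)

    def specificity(tn, fp):
        """Fraction of normal cells correctly left unflagged."""
        return tn / (tn + fp)

    tp, fn, tn, fp = 181, 19, 962, 38       # hypothetical counts
    print(f"sensitivity = {sensitivity(tp, fn):.3f}")   # 0.905
    print(f"specificity = {specificity(tn, fp):.3f}")   # 0.962
    ```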

    Robust model

    It’s important that a classification model is robust to different imaging conditions and can function with sparse training data, as commonly found in clinical applications. When trained and tested on diverse image datasets (different hospitals, microscopes and staining procedures), CytoDiffusion achieved state-of-the-art accuracy in all cases. Likewise, after training on limited subsets of 10, 20 and 50 images per class, CytoDiffusion consistently outperformed discriminative models, particularly in the most data-scarce conditions.

    Another essential feature of clinical classification tasks, whether performed by a human or an algorithm, is knowing the uncertainty in the final decision. The researchers developed a framework for evaluating uncertainty and showed that CytoDiffusion produced superior uncertainty estimates to human experts. With uncertainty quantified, cases with high certainty could be processed automatically, with uncertain cases flagged for human review.

    “When we tested its accuracy, the system was slightly better than humans,” says first author Simon Deltadahl from the University of Cambridge in a press statement. “But where it really stood out was in knowing when it was uncertain. Our model would never say it was certain and then be wrong, but that is something that humans sometimes do.”

    Finally, the team demonstrated CytoDiffusion’s ability to create heat maps highlighting regions that would need to change for an image to be reclassified. This feature provides insight into the model’s decision-making process and shows that it understands subtle differences between similar cell types. Such transparency is essential for clinical deployment of AI, making models more trustworthy as practitioners can verify that classifications are based on legitimate morphological features.

    “The true value of healthcare AI lies not in approximating human expertise at lower cost, but in enabling greater diagnostic, prognostic and prescriptive power than either experts or simple statistical models can achieve,” adds co-senior author Parashkev Nachev from University College London.

    CytoDiffusion is described in Nature Machine Intelligence.

    The post Generative AI model detects blood cell abnormalities appeared first on Physics World.

    https://physicsworld.com/a/generative-ai-model-detects-blood-cell-abnormalities/
    Tami Freeman

    Light pollution from satellite mega-constellations threatens space-based observations

    Study finds 96% of images from planned telescopes could be compromised

    The post Light pollution from satellite mega-constellations threatens space-based observations appeared first on Physics World.

    Almost every image that will be taken by future space observatories in low-Earth orbit could be tainted due to light contamination from satellites. That is according to a new analysis from researchers at NASA, which stresses that light pollution from satellites orbiting Earth must be reduced to guarantee astronomical research is not affected.

    The number of satellites orbiting Earth has increased from about 2000 in 2019 to 15 000 today. Many of these are part of so-called mega-constellations that provide services such as Internet coverage around the world, including in areas that were previously unable to access it. Examples of such constellations include SpaceX’s Starlink as well as Amazon’s Kuiper and Eutelsat’s OneWeb.

    Many of these mega-constellations share the same space as space-based observatories such as NASA’s Hubble Space Telescope. This means that the telescopes can capture streaks of reflected light from the satellites that render the images or data completely unusable for research purposes. That is despite the anti-reflective coatings applied to some newer satellites in SpaceX’s Starlink constellation, for example.

    Previous work has explored the impact of such satellite constellations on ground-based astronomy, both optical astronomy and radio astronomy. Yet their impact on telescopes in space has been overlooked.

    To find out more, Alejandro Borlaff from NASA’s Ames Research Center and colleagues simulated the views of four space-based telescopes: Hubble and the near-infrared observatory SPHEREx, which launched in 2025, as well as the European Space Agency’s proposed near-infrared ARRAKIHS mission and China’s planned Xuntian telescope.

    These observatories are, or will be, placed between 400 and 800 km above the Earth’s surface.

    The authors found that if the population of mega-constellation satellites grows to the 56 000 projected by the end of the decade, the satellites would contaminate about 39.6% of Hubble’s images and 96% of images from the other three telescopes.

    Borlaff and colleagues predict that the average number of satellites observed per exposure would be 2.14 for Hubble, 5.64 for SPHEREx, 69 for ARRAKIHS, and 92 for Xuntian.

    The authors note that one solution could be to deploy satellites at lower orbits than the telescopes operate, which would make them about four magnitudes dimmer. The downside is that emissions from these lower satellites could have implications for Earth’s ozone layer.
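
    The “about four magnitudes dimmer” figure corresponds to a brightness reduction of roughly a factor of 40, via the standard astronomical magnitude relation. The quick conversion below is shown only for illustration; the four-magnitude figure comes from the article.

    ```python
    # Converting the quoted "four magnitudes dimmer" into a brightness ratio
    # using the standard relation flux_ratio = 10**(0.4 * delta_m).
    delta_m = 4.0
    flux_ratio = 10 ** (0.4 * delta_m)
    print(f"{delta_m} magnitudes fainter = {flux_ratio:.1f}x dimmer")   # ~39.8x
    ```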

    An ‘urgent need for dialogue’

    Katherine Courtney, chair of the steering board for the Global Network on Sustainability in Space, says that without astronomy, the modern space economy “simply wouldn’t exist”.

    “The space industry owes its understanding of orbital mechanics, and much of the technology development that has unlocked commercial opportunities for satellite operators, to astronomy,” she says. “The burgeoning growth of the satellite population brings many benefits to life on Earth, but the consequences for the future of astronomy must be taken into consideration.”

    Courtney adds that there is now “an urgent need for greater dialogue and collaboration between astronomers and satellite operators to mitigate those impacts and find innovative ways for commercial and scientific operations to co-exist in space.”

    • Katherine Courtney, chair of the steering board for the Global Network on Sustainability in Space, and Alice Gorman from Flinders University in Adelaide, Australia, appeared on a Physics World Live panel discussion about the impact of space debris that was held on 10 November. A recording of the event is available here.

    The post Light pollution from satellite mega-constellations threatens space-based observations appeared first on Physics World.

    https://physicsworld.com/a/light-pollution-from-satellite-mega-constellations-threaten-space-based-observations/
    Michael Banks

    Physicists use a radioactive molecule’s own electrons to probe its internal structure

    Work on radium monofluoride could shed light on the asymmetry of matter and antimatter in the universe

    The post Physicists use a radioactive molecule’s own electrons to probe its internal structure appeared first on Physics World.

    Physicists have obtained the first detailed picture of the internal structure of radium monofluoride (RaF) thanks to the molecule’s own electrons, which penetrated the nucleus of the molecule and interacted with its protons and neutrons. This behaviour is known as the Bohr-Weisskopf effect, and study co-leader Shane Wilkins says that this marks the first time it has been observed in a molecule. The measurements themselves, he adds, are an important step towards testing for nuclear symmetry violation, which might explain why our universe contains much more matter than antimatter.

    RaF contains the radioactive isotope 225Ra, which is not easy to make, let alone measure. Producing it requires a large accelerator facility, where it emerges at high temperature and high velocity, and it is only available in tiny quantities (less than a nanogram in total) for short periods (its half-life is around 15 days).
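
    To get a feel for the scale, textbook decay arithmetic applied to the article’s figures (a nanogram-scale sample and a roughly 15-day half-life) gives of order 10^12 atoms and an activity around a megabecquerel. The numbers below are illustrative only.

    ```python
    import math

    # Back-of-the-envelope scale of the sample, using the "less than a nanogram"
    # and ~15-day half-life figures from the article (textbook decay arithmetic,
    # shown for illustration only).
    mass_g = 1e-9                  # one nanogram of 225Ra
    molar_mass = 225.0             # g/mol (approximate)
    avogadro = 6.02214076e23       # 1/mol
    half_life_s = 15 * 86400       # ~15 days in seconds

    n_atoms = mass_g / molar_mass * avogadro
    activity_bq = math.log(2) / half_life_s * n_atoms   # A = lambda * N

    print(f"atoms in 1 ng: {n_atoms:.2e}")
    print(f"activity: {activity_bq / 1e6:.1f} MBq")
    ```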

    “This imposes significant challenges compared to the study of stable molecules, as we need extremely selective and sensitive techniques in order to elucidate the structure of molecules containing 225Ra,” says Wilkins, who performed the measurements as a member of Ronald Fernando Garcia Ruiz’s research group at the Massachusetts Institute of Technology (MIT), US.

    The team chose RaF despite these difficulties because theory predicts that it is particularly sensitive to small nuclear effects that break the symmetries of nature. “This is because, unlike most atomic nuclei, the radium atom’s nucleus is octupole deformed, which basically means it has a pear shape,” explains the study’s other co-leader, Silviu-Marian Udrescu.

    Electrons inside the nucleus

    In their study, which is detailed in Science, the MIT team and colleagues at CERN, the University of Manchester in the UK and KU Leuven in Belgium focused on RaF’s hyperfine structure. This structure arises from interactions between nuclear and electron spins, and studying it can reveal valuable clues about the nucleus. For example, the nuclear magnetic dipole moment can provide information on how protons and neutrons are distributed inside the nucleus.

    In most experiments, physicists treat electron-nucleus interactions as taking place at (relatively) long ranges. With RaF, that’s not the case. Udrescu describes the radium atom’s electrons as being “squeezed” within the molecule, which increases the probability that they will interact with, and penetrate, the radium nucleus. This behaviour manifests itself as a slight shift in the energy levels of the radium atom’s electrons, and the team’s precision measurements – combined with state-of-the-art molecular structure calculations – confirm that this is indeed what happens.

    “We see a clear breakdown of this [long-range interactions] picture because the electrons spend a significant amount of time within the nucleus itself due to the special properties of this radium molecule,” Wilkins explains. “The electrons thus act as highly sensitive probes to study phenomena inside the nucleus.”

    Searching for violations of fundamental symmetries

    According to Udrescu, the team’s work “lays the foundations for future experiments that use this molecule to investigate nuclear symmetry violation and test the validity of theories that go beyond the Standard Model of particle physics.” In this model, each of the matter particles we see around us – from baryons like protons to leptons such as electrons – should have a corresponding antiparticle that is identical in every way apart from its charge and magnetic properties (which are reversed).

    The problem is that the Standard Model predicts that the Big Bang that formed our universe nearly 14 billion years ago should have generated equal amounts of antimatter and matter – yet measurements and observations made today reveal an almost entirely matter-based universe. Subtler differences between matter particles and their antimatter counterparts might explain why the former prevailed, so by searching for these differences, physicists hope to explain antimatter-matter asymmetry.

    Wilkins says the team’s work will be important for future such searches in species like RaF. Indeed, Wilkins, who is now at Michigan State University’s Facility for Rare Isotope Beams (FRIB), is building a new setup to cool and slow beams of radioactive molecules to enable higher-precision spectroscopy of species relevant to nuclear structure, fundamental symmetries and astrophysics. His long-term goal, together with other members of the RaX collaboration (which includes FRIB and the MIT team as well as researchers at Harvard University and the California Institute of Technology), is to implement advanced laser-based techniques using radium-containing molecules.

    The post Physicists use a radioactive molecule’s own electrons to probe its internal structure appeared first on Physics World.

    https://physicsworld.com/a/physicists-use-a-radioactive-molecules-own-electrons-to-probe-its-internal-structure/
    Isabelle Dumé

    Quantum-scale thermodynamics offers a tighter definition of entropy

    New formulation sheds light on the three-level maser

    The post Quantum-scale thermodynamics offers a tighter definition of entropy appeared first on Physics World.

    A new, microscopic formulation of the second law of thermodynamics for coherently driven quantum systems has been proposed by researchers in Switzerland and Germany. The researchers applied their formulation to several canonical quantum systems, such as a three-level maser. They believe the result provides a tighter definition of entropy in such systems, and could form a basis for further exploration.

    In any physical process, the first law of thermodynamics says that the total energy must always be conserved, with some converted to useful work and the remainder dissipated as heat. The second law of thermodynamics says that, in any allowed process, the total entropy – a measure associated with the dissipated heat – can never decrease.
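
    In compact textbook form (a standard statement of the two laws, not notation taken from the new study), the laws read:

    ```latex
    % First law: energy changes split between work and heat.
    % Second law: total entropy can never decrease.
    \begin{aligned}
      \text{First law:}\quad  & \mathrm{d}U = \delta W + \delta Q \\
      \text{Second law:}\quad & \mathrm{d}S_{\mathrm{tot}} \ge 0
    \end{aligned}
    ```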

    “I like to think of work being mediated by degrees of freedom that we control and heat being mediated by degrees of freedom that we cannot control,” explains theoretical physicist Patrick Potts of the University of Basel in Switzerland. “In the macroscopic scenario, for example, work would be performed by some piston – we can move it.” The heat, meanwhile, goes into modes such as phonons generated by friction.

    Murky at small scales

    This distinction, however, becomes murky at small scales: “Once you go microscopic everything’s microscopic, so it becomes much more difficult to say ‘what is it that you control – where is the work mediated – and what is it that you cannot control?’,” says Potts.

    Potts and colleagues in Basel and at RWTH Aachen University in Germany examined the case of optical cavities driven by laser light, systems that can do work: “If you think of a laser as being able to promote a system from a ground state to an excited state, that’s very important to what’s being done in quantum computers, for example,” says Potts. “If you rotate a qubit, you’re doing exactly that.”

    The light interacts with the cavity and makes an arbitrary number of bounces before leaking out. This emergent light is traditionally treated as heat in quantum simulations. However, it can still be partially coherent – if the cavity is empty, it can be just as coherent as the incoming light and can do just as much work.

    In 2020, quantum optician Alexia Auffèves of Université Grenoble Alpes in France and colleagues noted that the coherent component of the light exiting a cavity could potentially do work. In the new study, the researchers embedded this in a consistent thermodynamic framework. They studied several examples and formulated physically consistent laws of thermodynamics.

    In particular, they looked at the three-level maser, which is a canonical example of a quantum heat engine. However, it has generally been modelled semi-classically by assuming that the cavity contains a macroscopic electromagnetic field.

    Work vanishes

    “The old description will tell you that you put energy into this macroscopic field and that is work,” says Potts. “But once you describe the cavity quantum mechanically using the old framework then – poof! – the work is gone… Putting energy into the light field is no longer considered work, and whatever leaves the cavity is considered heat.”

    The researchers’ new thermodynamic treatment allows them to treat the cavity quantum mechanically and to parametrize the minimum entropy of the radiation that emerges – how much of it must be converted to uncontrolled degrees of freedom that can do no useful work, and how much can remain coherent.

    The researchers are now applying their formalism to study thermodynamic uncertainty relations as an extension of the traditional second law of thermodynamics. “It’s actually a trade-off between three things – not just efficiency and power, but fluctuations also play a role,” says Potts. “So the more fluctuations you allow for, the higher you can get the efficiency and the power at the same time. These three things are very interesting to look at with this new formalism because these thermodynamic uncertainty relations hold for classical systems, but not for quantum systems.”

    “This [work] fits very well into a question that has been heavily discussed for a long time in the quantum thermodynamics community, which is how to properly define work and how to properly define useful resources,” says quantum theorist Federico Cerisola of the UK’s University of Exeter. “In particular, they very convincingly argue that, in the particular family of experiments they’re describing, there are resources that have been ignored in the past when using more standard approaches that can still be used for something useful.”

    Cerisola says that, in his view, the logical next step is to propose a system – ideally one that can be implemented experimentally – in which radiation that would traditionally have been considered waste actually does useful work.

    The research is described in Physical Review Letters.  

    The post Quantum-scale thermodynamics offers a tighter definition of entropy appeared first on Physics World.

    https://physicsworld.com/a/quantum-scale-thermodynamics-offers-a-tighter-definition-of-entropy/
    No Author

    Bring gravity back down to Earth: from giraffes and tree snakes to ‘squishy’ space–time

    Emma Chapman reviews Crush: Close Encounters with Gravity by James Riordon

    The post Bring gravity back down to Earth: from giraffes and tree snakes to ‘squishy’ space–time appeared first on Physics World.

    When I was five years old, my family moved into a 1930s semi-detached house with a long strip of garden. At the end of the garden was a miniature orchard of eight apple trees the previous owners had planted – and it was there that I, much like another significantly more famous physicist, learned an important lesson about gravity.

    As I read in the shade of the trees, an apple would sometimes fall with a satisfying thunk into the soft grass beside me. Less satisfyingly, they sometimes landed on my legs, or even my head – and the big cooking apples really hurt. I soon took to sitting on old wooden pallets crudely wedged among the higher branches. It was not comfortable, but at least I could return indoors without bruises.

    The effects of gravity become common sense so early in life that we rarely stop to think about them past childhood. In his new book Crush: Close Encounters with Gravity, James Riordon has decided to take us back to the basics of this most fundamental of forces. Indeed, he explores an impressively wide range of topics – from why we dream of falling and why giraffes should not exist (but do), to how black holes form and the existence of “Planet 9”.

    Riordon, a physicist turned science writer, makes for a deeply engaging author. He is not afraid to put himself into the story, introducing difficult concepts through personal experience and explaining them with the help of everything including the kitchen sink, which in his hands becomes an analogue for a black hole.

    Gravity as a subject can easily be both too familiar and too challenging. In Riordon’s words, “Things with mass attract each other. That’s really all there is to Newtonian gravity.” Albert Einstein’s theory of general relativity, by contrast, is so intricate that it takes years of university-level study to truly master. Riordon avoids both pitfalls: he manages to make the simple fascinating again, and the complex understandable.

    He provides captivating insights into how gravity has shaped the animal kingdom, a perspective I had never much considered. Did you know that tree snakes have their hearts positioned closer to their heads than their land-based cousins? I certainly didn’t. The higher placement ensures a steady blood flow to the brain, even when the snake is climbing vertically. It is one of many examples that make you look again at the natural world with fresh eyes.

    Riordon’s treatment of gravity in Einstein’s abstract space–time is equally impressive, perhaps unsurprisingly, as his previous books include Very Easy Relativity and Relatively Easy Relativity. Riordon takes a careful, patient approach – though I have never before heard general relativity reduced to “space–time is squishy”. But why not? The phrase sticks and gives us a handhold as we scale the complications of the theory. For those who want to extend the challenge, a mathematical background to the theory is provided in an appendix, and every chapter is well referenced and accompanied by suggestions for further reading.

    If anything, I found myself wanting more examples of gravity as experienced by humans and animals on Earth, rather than in the astronomical realm. I found these down-to-earth chapters the most fascinating: they formed a bridge between the vast and the local, reminding us that the same force that governs the orbits of galaxies also brings an apple to the ground. This may be a reaction only felt by astronomers like me, who already spend their days looking upward. I can easily see how the balance Riordon chose is necessary for someone without that background, and Einstein’s gravity does require galactic scales to appreciate, after all.

    Crush is a generally uncomplicated and pleasurable read. The anecdotes can sometimes be a little long-winded and there are parts of the book that are not without challenge. But it is pitched perfectly for the curious general reader and even for those dipping their toes into popular science for the first time. I can imagine an enthusiastic A-level student devouring it; it is exactly the kind of book I would have loved at that age. Even if some of it would have gone over my head, Riordon’s enthusiasm and gift for storytelling would have kept me more than interested, as I sat up on that pallet in my favourite apple tree.

    I left that house, and that tree, a long time ago, but just a few miles down the road from where I live now stands another, far more famous apple tree. In the garden of Woolsthorpe Manor near Grantham, Newton is said to have watched an apple fall. From that small event, he began to ask the questions that reshaped his and our understanding of the universe. Whether or not the story is true hardly matters – Newton was constantly inspired by the natural world, so it isn’t improbable, and that apple tree remains a potent symbol of curiosity and insight.

    “[Newton] could tell us that an apple falls, and how quickly it will do it. As for the question of why it falls, that took Einstein to answer,” writes Riordon. Crush is a crisp and fresh tour through a continuum from orchards to observatories, showing that every planetary orbit, pulse of starlight and even every apple fall is part of the same wondrous story.

    • 2025 MIT Press 288pp £27hb

    The post Bring gravity back down to Earth: from giraffes and tree snakes to ‘squishy’ space–time appeared first on Physics World.

    https://physicsworld.com/a/bring-gravity-back-down-to-earth-from-giraffes-and-tree-snakes-to-squishy-space-time/
    No Author

    Ice XXI appears in a diamond anvil cell

    Previously unknown ice phase exists at room temperature and pressures of 2 GPa

    The post Ice XXI appears in a diamond anvil cell appeared first on Physics World.

    A new phase of water ice, dubbed ice XXI, has been discovered by researchers working at the European XFEL and PETRA III facilities. The ice, which exists at room temperature and is structurally distinct from all previously observed phases of ice, was produced by rapidly compressing water to high pressures of 2 GPa. The finding could shed light on how different ice phases form at high pressures, including on icy moons and planets.

    On Earth, ice can take many forms, and its properties depend strongly on its structure. The main type of naturally occurring ice is hexagonal ice (Ih), so-called because the water molecules arrange themselves in a hexagonal lattice (this is the reason why snowflakes have six-fold symmetry). However, under certain conditions – usually involving very high pressures and low temperatures – ice can take on other structures. Indeed, 20 different forms of ice have been identified so far, denoted by Roman numerals (ice I, II, III and so on up to ice XX).

    Pressures of up to 2 GPa allow ice to form even at room temperature

    Researchers from the Korea Research Institute of Standards and Science (KRISS) have now produced a 21st form of ice by applying pressures of up to two gigapascals. Such high pressures are roughly 20 000 times higher than normal air pressure at sea level, and they allow ice to form even at room temperature – albeit only within a device known as a dynamic diamond anvil cell (dDAC) that is capable of producing such extremely high pressures.

    “In this special pressure cell, samples are squeezed between the tips of two opposing diamond anvils and can be compressed along a predefined pressure pathway,” explains Cornelius Strohm, a member of the DESY HIBEF team that set up the experiment using the High Energy Density (HED) instrument at the European XFEL.

    Much more tightly packed molecules

    The structure of ice XXI is different from all previously observed phases of ice because its molecules are much more tightly packed. This gives it the largest unit cell volume of all currently known types of ice, says KRISS scientist Geun Woo Lee. It is also metastable, meaning that it can exist even though another form of ice (in this case ice VI) would be more stable under the conditions in the experiment.

    “This rapid compression of water allows it to remain liquid up to higher pressures, where it should have already crystallized to ice VI,” explains Lee. “Ice VI is an especially intriguing phase, thought to be present in the interior of icy moons such as Titan and Ganymede. Its highly distorted structure may allow complex transition pathways that lead to metastable ice phases.”

    Ice XXI has a body-centred tetragonal crystal structure

    To study how the new ice sample formed, the researchers rapidly compressed and decompressed it over 1000 times in the diamond anvil cell while imaging it every microsecond using the European XFEL, which produces X-ray pulses at megahertz rates. They found that the liquid water crystallizes into different structures depending on how supercompressed it is.

    The KRISS team then used the P02.2 beamline at PETRA III to determine that ice XXI has a body-centred tetragonal crystal structure with a large unit cell (a = b = 20.197 Å and c = 7.891 Å) at approximately 1.6 GPa. This unit cell contains 152 water molecules, resulting in a density of 1.413 g cm−3.
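    As a quick consistency check – not part of the KRISS analysis, just back-of-envelope arithmetic using the standard molar mass of water – the quoted lattice parameters and molecule count do indeed reproduce the reported density:

```python
# Back-of-envelope consistency check of the reported ice XXI parameters:
# a body-centred tetragonal cell with a = b = 20.197 A and c = 7.891 A
# containing 152 H2O molecules should give the quoted ~1.413 g/cm^3.
a, c = 20.197e-8, 7.891e-8         # lattice parameters in cm (1 A = 1e-8 cm)
n_molecules = 152                   # water molecules per unit cell
m_h2o = 18.015                      # molar mass of water, g/mol
n_avogadro = 6.02214076e23          # Avogadro constant, 1/mol

volume = a * a * c                              # tetragonal cell volume, cm^3
mass = n_molecules * m_h2o / n_avogadro         # mass of one unit cell, g
print(f"density = {mass / volume:.3f} g/cm^3")  # -> ~1.413 g/cm^3
```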

    The experiments were far from easy, recalls Lee. Upon crystallization, ice XXI grows upwards (that is, in the vertical direction), which makes it difficult to precisely analyse its crystal structure. “The difficulty for us is to keep it stable for a long enough period to make precise structural measurements in a single-crystal diffraction study,” he says.

    The multiple pathways of ice crystallization unearthed in this work, which is detailed in Nature Materials, imply that many more ice phases may exist. Lee says it is therefore important to analyse the mechanism behind the formation of these phases. “This could, for example, help us better understand the formation and evolution of these phases on icy moons or planets,” he tells Physics World.

    The post Ice XXI appears in a diamond anvil cell appeared first on Physics World.

    https://physicsworld.com/a/ice-xxi-appears-in-a-diamond-anvil-cell/
    Isabelle Dumé

    Studying the role of the quantum environment in attosecond science

    Researchers have developed a new way to model dephasing in attosecond experiments

    The post Studying the role of the quantum environment in attosecond science appeared first on Physics World.

    Attosecond science is undoubtedly one of the fastest growing branches of physics today.

    Its popularity was demonstrated by the award of the 2023 Nobel Prize in Physics to Anne L’Huillier, Paul Corkum and Ferenc Krausz for experimental methods that generate attosecond pulses of light for the study of electron dynamics in matter.

    One of the most important processes in this field is dephasing. This happens when an electron loses its phase coherence because of interactions with its surroundings.

    This loss of coherence can obscure the fine details of electron dynamics, making it harder to capture precise snapshots of these rapid processes.

    The most common way to model this process in light-matter interactions is by using the relaxation time approximation. This approach greatly simplifies the picture as it avoids the need to model every single particle in the system.
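    For readers who want to see what the relaxation time approximation looks like in practice, the sketch below integrates the textbook optical Bloch equations for a resonantly driven two-level system, with the environment entering only through phenomenological decay times T1 and T2 (the values here are assumed for illustration). It is a generic example, not the Ottawa group’s open-quantum-system model described below.

```python
import numpy as np

# Minimal illustration (not the Ottawa group's model) of the relaxation time
# approximation: a resonantly driven two-level system whose environment enters
# only through phenomenological decay times T1 (population) and T2 (coherence).
omega_rabi = 2 * np.pi * 1.0    # Rabi frequency (arbitrary units)
T1, T2 = 2.0, 1.0               # relaxation and dephasing times (assumed values)
dt, n_steps = 1e-4, 100_000     # simple forward-Euler integration

u, v, w = 0.0, 0.0, -1.0        # Bloch vector; w = -1 means all population in the ground state
for _ in range(n_steps):
    du = -u / T2
    dv = omega_rabi * w - v / T2
    dw = -omega_rabi * v - (w + 1.0) / T1
    u, v, w = u + du * dt, v + dv * dt, w + dw * dt

print(f"steady-state excited-state population: {(1 + w) / 2:.3f}")
```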

    Its use is fine for dilute gases, but it doesn’t work as well with intense lasers and denser materials, such as solids, because it greatly overestimates ionisation.

    This is a significant problem as ionisation is the first step in many processes such as electron acceleration and high-harmonic generation.

    To address this, a team led by researchers from the University of Ottawa has developed a new method that corrects for the problem.

    By introducing a heat bath into the model they were able to represent the many-body environment that interacts with electrons, without significantly increasing the complexity.

    This new approach should enable the identification of new effects in attosecond science or wherever strong electromagnetic fields interact with matter.

    Read the full article

    Strong field physics in open quantum systems – IOPscience

    N. Boroumand et al, 2025 Rep. Prog. Phys. 88 070501

     

    The post Studying the role of the quantum environment in attosecond science appeared first on Physics World.

    https://physicsworld.com/a/studying-the-role-of-the-quantum-environment-in-attosecond-science/
    Paul Mabey

    Characterising quantum many-body states

    A team of researchers have developed a new method for characterising quantum properties of large systems using graph theory

    The post Characterising quantum many-body states appeared first on Physics World.

    Describing the non-classical properties of a complex many-body system – such as its entanglement or coherence – is an important part of developing quantum technologies.

    An ideal tool for this task would work well with large systems and be easy both to compute and to measure. Unfortunately, no single tool yet exists that suits every situation.

    With this goal in mind a team of researchers – Marcin Płodzień and Maciej Lewenstein (ICFO, Barcelona, Spain) and Jan Chwedeńczuk (University of Warsaw, Poland) – began work on a special type of quantum state used in quantum computing – graph states.

    These states can be visualised as graphs or networks where each vertex represents a qubit, and each edge represents an interaction between pairs of qubits.

    The team studied four different shapes of graph states using new mathematical tools they developed. They found that one of these in particular, the Turán graph, could be very useful in quantum metrology.
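    As a concrete illustration of the vertex-equals-qubit picture – a minimal sketch, not the authors’ formalism – the snippet below builds the statevector of a small graph state: every qubit starts in the |+⟩ state and a controlled-Z gate is applied along each edge. The example graph is the four-qubit Turán graph T(4, 2), constructed here as a complete multipartite graph.

```python
import numpy as np
import networkx as nx

# Illustrative sketch only: construct the statevector of a graph state by
# preparing every qubit in |+> and applying a controlled-Z gate along each edge.
# The Turan graph T(n, r) is the complete multipartite graph with parts as
# equal in size as possible; here n = 4 qubits split into r = 2 parts.
graph = nx.complete_multipartite_graph(2, 2)    # Turan graph T(4, 2)
n = graph.number_of_nodes()

state = np.ones(2**n, dtype=complex) / np.sqrt(2**n)   # |+> on every qubit

for (i, j) in graph.edges():
    for basis in range(2**n):
        # CZ gate: flip the sign of amplitudes where both qubits i and j are 1
        if (basis >> i) & 1 and (basis >> j) & 1:
            state[basis] *= -1

print(f"{n}-qubit graph state built from {graph.number_of_edges()} edges, "
      f"norm = {np.linalg.norm(state):.3f}")
```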

    Their method is (relatively) straightforward and does not require many assumptions. This means that it could be applied to any shape of graph beyond the four studied here.

    The results will be useful in various quantum technologies wherever precise knowledge of many-body quantum correlations is necessary.

    Read the full article

    Many-body quantum resources of graph states – IOPscience

    M. Płodzień et al, 2025 Rep. Prog. Phys. 88 077601

     

    The post Characterising quantum many-body states appeared first on Physics World.

    https://physicsworld.com/a/characterising-quantum-many-body-states/
    Paul Mabey

    Extra carbon in the atmosphere may disrupt radio communications

    Increasing CO2 levels are triggering changes in the ionosphere that will adversely affect signals, say scientists

    The post Extra carbon in the atmosphere may disrupt radio communications appeared first on Physics World.

    Higher levels of carbon dioxide (CO2) in the Earth’s atmosphere could harm radio communications by enhancing a disruptive effect in the ionosphere. According to researchers at Kyushu University, Japan, who modelled the effect numerically for the first time, this little-known consequence of climate change could have significant impacts on shortwave radio systems such as those employed in broadcasting, air traffic control and navigation.

    “While increasing CO2 levels in the atmosphere warm the Earth’s surface, they actually cool the ionosphere,” explains study leader Huixin Liu of Kyushu’s Faculty of Science. “This cooling doesn’t mean it is all good: it decreases the air density in the ionosphere and accelerates wind circulation. These changes affect the orbits and lifespan of satellites and space debris and also disrupt radio communications through localized small-scale plasma irregularities.”

    The sporadic E-layer

    One such irregularity is a dense but transient layer of metal ions that forms 90‒120 km above the Earth’s surface. This sporadic E-layer (Es), as it is known, is roughly 1‒5 km thick and can stretch from tens to hundreds of kilometres in the horizontal direction. Its density is highest during the day, and it peaks around the time of the summer solstice.

    The formation of the Es is hard to predict, and the mechanisms behind it are not fully understood. However, the prevailing “wind shear” theory suggests that vertical shears in horizontal winds, combined with the Earth’s magnetic field, cause metallic ions such as Fe+, Na+ and Ca+ to converge in the ionospheric dynamo region and form thin layers of enhanced ionization. The ions themselves largely come from metals in meteoroids that enter the Earth’s atmosphere and disintegrate at altitudes of around 80‒100 km.

    Effects of increasing CO2 concentrations

    While previous research has shown that increases in CO2 trigger atmospheric changes on a global scale, relatively little is known about how these increases affect smaller-scale ionospheric phenomena like the Es. In the new work, which is published in Geophysical Research Letters, Liu and colleagues used a whole-atmosphere model to simulate the upper atmosphere at two different CO2 concentrations: 315 ppm and 667 ppm.

    “The 315 ppm represents the CO2 concentration in 1958, the year in which recordings started at the Mauna Loa observatory, Hawaii,” Liu explains. “The 667 ppm represents the projected CO2 concentration for the year 2100, based on a conservative assumption that the increase in CO2 is constant at a rate of around 2.5 ppm/year since 1958.”
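    A quick bit of arithmetic (mine, not the authors’) confirms that the two numbers hang together:

```python
# Quick check of the projection quoted above: a constant rise of about
# 2.5 ppm per year from the 315 ppm measured in 1958 lands close to the
# 667 ppm assumed for 2100 (667 ppm corresponds to ~2.48 ppm per year).
years = 2100 - 1958
print(f"315 + 2.5*{years} = {315 + 2.5 * years:.0f} ppm")             # ~670 ppm
print(f"implied rate for 667 ppm: {(667 - 315) / years:.2f} ppm/yr")  # ~2.48
```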

    The researchers then evaluated how these different CO2 levels influence a phenomenon known as vertical ion convergence (VIC) which, according to the wind shear theory, drives the Es. The simulations revealed that the higher the atmospheric CO2 levels, the greater the VIC at altitudes of 100–120 km. “What is more, this increase is accompanied by the VIC hotspots shifting downwards by approximately 5 km,” says Liu. “The VIC patterns also change dramatically during the day and these diurnal variability patterns continue into the night.”

    According to the researchers, the physical mechanism underlying these changes depends on two factors. The first is reduced collisions between metallic ions and the neutral atmosphere as a direct result of cooling in the ionosphere. The second is changes in the zonal wind shear, which are likely caused by long-term trends in atmospheric tides.

    “These results are exciting because they show that the impacts of CO2 increase can extend all the way from Earth’s surface to altitudes at which HF and VHF radio waves propagate and communications satellites orbit,” Liu tells Physics World. “This may be good news for ham radio amateurs, as you will likely receive more signals from faraway countries more often. For radio communications, however, especially at HF and VHF frequencies employed for aviation, ships and rescue operations, it means more noise and frequent disruption in communication and hence safety. The telecommunications industry might therefore need to adjust their frequencies or facility design in the future.”

    The post Extra carbon in the atmosphere may disrupt radio communications appeared first on Physics World.

    https://physicsworld.com/a/extra-carbon-in-the-atmosphere-may-disrupt-radio-communications/
    Isabelle Dumé

    Phase-changing material generates vivid tunable colours

    A multilayer stack containing a thin film of temperature-sensitive vanadium dioxide creates tunable structural colours on rigid and flexible surfaces

    The post Phase-changing material generates vivid tunable colours appeared first on Physics World.

    Switchable camouflage: a toy gecko featuring a flexible layer of the thermally tunable colour coating appears greenish blue at room temperature (left); upon heating (right), its body changes to a dark magenta colour. (Courtesy: Aritra Biswa)

    Structural colours – created using nanostructures that scatter and reflect specific wavelengths of light – offer a non-toxic, fade-resistant and environmentally friendly alternative to chemical dyes. Large-scale production of structural colour-based materials, however, has been hindered by fabrication challenges and a lack of effective tuning mechanisms.

    In a step towards commercial viability, a team at the University of Central Florida has used vanadium dioxide (VO2) – a material with temperature-sensitive optical and structural properties – to generate tunable structural colour on both rigid and flexible surfaces, without requiring complex nanofabrication.

    Senior author Debashis Chanda and colleagues created their structural colour platform by stacking a thin layer of VO2 on top of a thick, reflective layer of aluminium to form a tunable thin-film cavity. At specific combinations of VO2 grain size and layer thickness this structure strongly absorbs certain frequency bands of visible light, producing the appearance of vivid colours.

    The key enabler of this approach is the fact that at a critical transition temperature, VO2 reversibly switches from insulator to metal, accompanied by a change in its crystalline structure. This phase change alters the interference conditions in the thin-film cavity, varying the reflectance spectra and changing the perceived colour. Controlling the thickness of the VO2 layer enables the generation of a wide range of structural colours.
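    The colour-switching mechanism can be illustrated with the textbook formula for reflection from a single absorbing film on a mirror. The sketch below uses placeholder refractive indices and a placeholder thickness to stand in for insulating and metallic VO2 and for aluminium – the article does not give the measured optical constants – and simply shows how switching the film’s optical constants reweights the reflected spectrum across the blue, green and red bands, which is the origin of the perceived colour change.

```python
import numpy as np

# Minimal thin-film interference sketch (normal incidence) for an absorbing
# film on a metallic mirror, loosely inspired by the VO2-on-aluminium cavity.
# The refractive indices and thickness below are illustrative placeholders,
# NOT measured VO2 or Al dispersion data.
def reflectance(wavelength_nm, n_film, d_nm, n_substrate, n_ambient=1.0):
    """Airy reflectance of ambient / film (thickness d) / substrate at normal incidence."""
    r01 = (n_ambient - n_film) / (n_ambient + n_film)
    r12 = (n_film - n_substrate) / (n_film + n_substrate)
    beta = 2 * np.pi * n_film * d_nm / wavelength_nm       # complex phase across the film
    r = (r01 + r12 * np.exp(2j * beta)) / (1 + r01 * r12 * np.exp(2j * beta))
    return np.abs(r)**2

bands = {"blue": (400, 490), "green": (490, 580), "red": (580, 700)}   # nm
n_al = 1.0 + 6.0j                                                      # placeholder metal index
states = {"cold (insulating VO2)": 2.9 + 0.4j, "hot (metallic VO2)": 2.0 + 1.2j}

for label, n_film in states.items():
    summary = []
    for band, (lo, hi) in bands.items():
        wl = np.linspace(lo, hi, 50)
        summary.append(f"{band} {reflectance(wl, n_film, 90.0, n_al).mean():.2f}")
    print(f"{label:24s} mean reflectance: " + ", ".join(summary))
```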

    The bilayer structures are grown via a combination of magnetron sputtering and electron-beam deposition, techniques compatible with large-scale production. By adjusting the growth parameters during fabrication, the researchers could broaden the colour palette and control the temperature at which the phase transition occurs. To expand the available colour range further, they added a third ultrathin layer of high-refractive index titanium dioxide on top of the bilayer.

    The researchers describe a range of applications for their flexible coloration platform, including a colour-tunable maple leaf pattern, a thermal sensing label on a coffee cup and tunable structural coloration on flexible fabrics. They also demonstrated its use on complex shapes, such as a toy gecko with a flexible tunable colour coating and an embedded heater.

    “These preliminary demonstrations validate the feasibility of developing thermally responsive sensors, reconfigurable displays and dynamic colouration devices, paving the way for innovative solutions across fields such as wearable electronics, cosmetics, smart textiles and defence technologies,” the team concludes.

    The research is described in Proceedings of the National Academy of Sciences.

    The post Phase-changing material generates vivid tunable colours appeared first on Physics World.

    https://physicsworld.com/a/phase-changing-material-generates-vivid-tunable-colours/
    Tami Freeman

    Semiconductor laser pioneer Susumu Noda wins 2026 Rank Prize for Optoelectronics

    Noda made breakthroughs in the development of the Photonic Crystal Surface Emitting Laser

    The post Semiconductor laser pioneer Susumu Noda wins 2026 Rank Prize for Optoelectronics appeared first on Physics World.

    Susumu Noda of Kyoto University has won the 2026 Rank Prize for Optoelectronics for the development of the Photonic Crystal Surface Emitting Laser (PCSEL). For more than 25 years, Noda developed this new form of laser, which has potential applications in high-precision manufacturing as well as in LIDAR technologies.

    Since the development of the laser in 1960, optical fibre lasers and semiconductor lasers have emerged in recent decades as competing technologies.

    A semiconductor laser works by pumping an electrical current into a region where an n-doped (excess of electrons) and a p-doped (excess of “holes”) semiconductor material meet, causing electrons and holes to combine and release photons.

    Semiconductor lasers have several advantages – compactness, high “wallplug” efficiency and ruggedness – but fall short in other areas, such as brightness and functionality.

    This means that conventional semiconductor lasers require external optical and mechanical elements to improve their performance, which results in large and impractical systems.

    ‘A great honour’

    In the late 1990s, Noda began working on a new type of semiconductor laser that could challenge the performance of optical fibre lasers. These so-called PCSELs employ a photonic crystal layer in between the semiconductor layers. Photonic crystals are nanostructured materials in which a periodic variation of the dielectric constant – formed, for example, by a lattice of holes – creates a photonic band-gap.

    Noda and his research group made a series of breakthroughs in the technology, such as demonstrating control of polarization and beam shape by tailoring the photonic crystal structure, and expanding operation into blue–violet wavelengths.

    The resulting PCSELs emit a high-quality, symmetric beam with narrow divergence and boast high brightness and high functionality while maintaining the benefits of conventional semiconductor lasers. In 2013, 0.2 W PCSELs became available, and a few years later watt-class devices became operational.

    Noda says that it is “a great honour and a surprise” to receive the prize. “I am extremely happy to know that more than 25 years of research on photonic-crystal surface-emitting lasers has been recognized in this way,” he adds. “I do hope to continue to further develop the research and its social implementation.”

    Susumu Noda received his BSc and then PhD in electronics from Kyoto University in 1982 and 1991, respectively. From 1984 he also worked at Mitsubishi Electric Corporation, before joining Kyoto University in 1988 where he is currently based.

    Founded in 1972 by the British industrialist and philanthropist Lord J Arthur Rank, the Rank Prize is awarded biennially in nutrition and optoelectronics. The 2026 Rank Prize for Optoelectronics, which has a cash award of £100 000, will be awarded formally at an event held in June.

    The post Semiconductor laser pioneer Susumu Noda wins 2026 Rank Prize for Optoelectronics appeared first on Physics World.

    https://physicsworld.com/a/semiconductor-laser-pioneer-susumu-noda-wins-2026-rank-prize-for-optoelectronics/
    Michael Banks

    Staying the course with lockdowns could end future pandemics in months

    New calculation of viral spread suggests that rapid elimination of SARS-CoV-2-like viruses is scientifically feasible, though social challenges remain

    The post Staying the course with lockdowns could end future pandemics in months appeared first on Physics World.

    As a theoretical and mathematical physicist at Imperial College London, UK, Bhavin Khatri spent years using statistical physics to understand how organisms evolve. Then the COVID-19 pandemic struck, and like many other scientists, he began searching for ways to apply his skills to the crisis. This led him to realize that the equations he was using to study evolution could be repurposed to model the spread of the virus – and, crucially, to understand how it could be curtailed.

    In a paper published in EPL, Khatri models the spread of a SARS-CoV-2-like virus using branching process theory, which he’d previously used to study how advantageous alleles (variations in a genetic sequence) become more prevalent in a population. He then uses this model to assess the duration that interventions such as lockdowns would need to be applied in order to completely eliminate infections, with the strength of the intervention measured in terms of the number of people each infected person goes on to infect (the virus’ effective reproduction number, R).

    Tantalizingly, the paper concludes that applying such interventions worldwide in June 2020 could have eliminated the COVID virus by January 2021, several months before the widespread availability of vaccines reduced its impact on healthcare systems and led governments to lift restrictions on social contact. Physics World spoke to Khatri to learn more about his research and its implications for future pandemics.

    What are the most important findings in your work?

    One important finding is that we can accurately calculate the distribution of times required for a virus to become extinct by making a relatively simple approximation. This approximation amounts to assuming that people have relatively little population-level “herd” immunity to the virus – exactly the situation that many countries, including the UK, faced in March 2020.

    Making this approximation meant I could reduce the three coupled differential equations of the well-known SIR model (which models pandemics via the interplay between Susceptible, Infected and Recovered individuals) to a single differential equation for the number of infected individuals in the population. This single equation turned out to be the same one that physics students learn when studying radioactive decay. I then used the discrete stochastic version of exponential decay and standard approaches in branching process theory to calculate the distribution of extinction times.

    Simulation trajectories: a) the decline in the number of infected individuals over time; b) the probability density of extinction times for the same parameters as in a), showing that the most likely extinction times are measured in months. (Courtesy: Bhavin S. Khatri 2025 EPL 152 11003 DOI 10.1209/0295-5075/ae0c31 CC-BY 4.0 https://creativecommons.org/licenses/by/4.0/)

    Alongside the formal theory, I also used my experience in population genetic theory to develop an intuitive approach for calculating the mean of this extinction time distribution. In population genetics, when a mutation is sufficiently rare, changes in its number of copies in the population are dominated by randomness. This is true even if the mutation has a large selective advantage: it has to grow by chance to sufficient critical size – on the order of 1/(selection strength) – for selection to take hold.

    The same logic works in reverse when applied to a declining number of infections. Initially, they will decline deterministically, but once they go below a threshold number of individuals, changes in infection numbers become random. Using the properties of such random walks, I calculated an expression for the threshold number and the mean duration of the stochastic phase. These agree well with the formal branching process calculation.

    In practical terms, the main result of this theoretical work is to show that for sufficiently strong lockdowns (where, on average, only one of every two infected individuals goes on to infect another person, R=0.5), this distribution of extinction times was narrow enough to ensure that the COVID pandemic virus would have gone extinct in a matter of months, or at most a year.
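    The “months, not years” conclusion can be illustrated with a toy Galton–Watson simulation in which each infected person infects a Poisson-distributed number of others with mean R = 0.5. This is only a caricature of the full branching-process calculation, and the starting prevalence and five-day generation interval below are assumed for illustration rather than taken from the paper:

```python
import numpy as np

# Toy branching-process simulation (not Khatri's full calculation): each
# infected person infects Poisson(R) others per generation, with R = 0.5.
# The initial prevalence and the ~5-day generation interval are assumed
# illustrative values, not figures from the paper.
rng = np.random.default_rng(1)
R, n0, generation_days, n_runs = 0.5, 100_000, 5.0, 200

extinction_days = []
for _ in range(n_runs):
    infected, generation = n0, 0
    while infected > 0:
        infected = rng.poisson(R * infected)   # next generation of infections
        generation += 1
    extinction_days.append(generation * generation_days)

print(f"mean time to extinction: {np.mean(extinction_days) / 30:.1f} months")
print(f"95th percentile:         {np.percentile(extinction_days, 95) / 30:.1f} months")
```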

    How realistic is this counterfactual scenario of eliminating SARS-CoV-2 within a year?

    Leaving politics and the likelihood of social acceptance aside for the moment, if a sufficiently strong lockdown could have been maintained for a period of roughly six months across the globe, then I am confident that the virus could have been reduced to very low levels, or even made extinct.

    The question then is: is this a stable situation? From the perspective of a single nation, if the rest of the world still has infections, then that nation either needs to maintain its lockdown or be prepared to re-impose it if there are new imported cases. From a global perspective, a COVID-free world should be a stable state, unless an animal reservoir of infections causes re-infections in humans.

    Modelling the decline of a virus: Theoretical physicist and biologist Bhavin Khatri. (Courtesy: Bhavin Khatri)

    As for the practical success of such a strategy, that depends on politics and the willingness of individuals to remain in lockdown. Clearly, this is not in the model. One thing I do discuss, though, is that this strategy becomes far more difficult once more infectious variants of SARS-CoV-2 evolve. However, the problem I was working on before this one (which I eventually published in PNAS) concerned the probability of evolutionary rescue or resistance, and that work suggests that evolution of new COVID variants reduces significantly when there are fewer infections. So an elimination strategy should also be more robust against the evolution of new variants.

    What lessons would you like experts (and the public) to take from this work when considering future pandemic scenarios?

    I’d like them to conclude that pandemics with similar properties are, in principle, controllable to small levels of infection – or complete extinction – on timescales of months, not years, and that controlling them minimizes the chance of new variants evolving. So, although the question of the political and social will to enact such an elimination strategy is not in the scope of the paper, I think if epidemiologists, policy experts, politicians and the public understood that lockdowns have a finite time horizon, then it is more likely that this strategy could be adopted in the future.

    I should also say that my work makes no comment on the social harms of lockdowns, which shouldn’t be minimized and would need to be weighed against the potential benefits.

    What do you plan to do next?

    I think the most interesting next avenue will be to develop theory that lets us better understand the stability of the extinct state at the national and global level, under various assumptions about declining infections in other countries that adopted different strategies and the role of an animal reservoir.

    It would also be interesting to explore the role of “superspreaders”, or infected individuals who infect many other people. There’s evidence that many infections spread primarily through relatively few superspreaders, and heuristic arguments suggest that taking this into account would decrease the time to extinction compared to the estimates in this paper.

    I’ve also had a long-term interest in understanding the evolution of viruses through the lens of what are known as genotype–phenotype maps, which describe the non-trivial and often redundant mapping from genetic sequences to function, and in which the role of stochasticity in evolution can be described using analogies from statistical physics. For the evolution of the antibodies that help us avoid virus antigens, this would be a driven system, and theories of non-equilibrium statistical physics could play a role in answering questions about the evolution of new variants.

    The post Staying the course with lockdowns could end future pandemics in months appeared first on Physics World.

    https://physicsworld.com/a/staying-the-course-with-lockdowns-could-end-future-pandemics-in-months/
    No Author

    When is good enough ‘good enough’?

    Honor Powrie extols the virtues of being just “good enough” in life

    The post When is good enough ‘good enough’? appeared first on Physics World.

    Whether you’re running a business project, carrying out scientific research, or doing a spot of DIY around the house, knowing when something is “good enough” can be a tough question to answer. To me, “good enough” means something that is fit for purpose. It’s about striking a balance between the effort required to achieve perfection and the cost of not moving forward. It’s an essential mindset when perfection is either not needed or – as is often the case – not attainable.

    When striving for good enough, the important thing to focus on is that your outcome should meet expectations, but not massively exceed them. Sounds simple, but how often have we heard people say things like they’re “polishing coal”, striving for “gold plated” or “trying to make a silk purse out of a sow’s ear”? It basically means they haven’t understood, defined or even accepted the requirements of the end goal.

    Trouble is, as we go through school, college and university, we’re brought up to believe that we should strive for the best in whatever we study. Those with the highest grades, we’re told, will probably get the best opportunities and career openings. Unfortunately, this approach means we think we need to aim for perfection in everything in life, which is not always a good thing.

    How to be good enough

    So why is aiming for “good enough” a good thing to do? First, there’s the notion of “diminishing returns”. It takes a disproportionate amount of effort to achieve the final, small improvements that most people won’t even notice. Put simply, time can be wasted on unnecessary refinements, as embodied by the 80/20 rule (see box).

    The 80/20 rule: the guiding principle of “good enough”

    Also known as the Pareto principle – in honour of the Italian economist Vilfredo Pareto who first came up with the idea – the 80/20 rule states that for many outcomes, 80% of consequences or results come from 20% of the causes or effort. The principle helps to identify where to prioritize activities to boost productivity and get better results. It is a guideline, and the ratios can vary, but it can be applied to many things in both our professional and personal lives.

    Examples from the world of business include the following:

    Business sales: 80% of a company’s revenue might come from 20% of its customers.

    Company productivity: 80% of your results may come from 20% of your daily tasks.

    Software development: 80% of bugs could be caused by 20% of the code.

    Quality control: 20% of defects may cause 80% of customer complaints.

    Good enough also helps us to focus efforts. When a consumer or customer doesn’t know exactly what they want, or a product development route is uncertain, it can be better to deliver things in small chunks. Providing something basic but usable lets you solicit feedback that helps clarify requirements, or suggests improvements and additions to incorporate into the next chunk. This is broadly along the lines of a “minimum viable product”.

    Not seeking perfection reminds us too that solutions to problems are often uncertain. If it’s not clear how, or even if, something might work, a proof of concept (PoC) can instead be a good way to try something out. Progress can be made by solving a specific technical challenge, whether via a basic experiment, demonstration or short piece of research. A PoC should help avoid committing significant time and resource to something that will never work.

    Aiming for “good enough” naturally leads us to the notion of “continuous improvement”. It’s a personal favourite of mine because it allows for things to be improved incrementally as we learn or get feedback, rather than producing something in one go and then forgetting about it. It helps keep things current and relevant and encourages a culture of constantly looking for a better way to do things.

    Finally, when searching for good enough, don’t forget the idea of ballpark estimates. Making approximations sounds too simple to be effective, but sometimes a rough estimate is really all you need. If an approximate guess can inform and guide your next steps or determine whether further action will be necessary then go for it. 

    The benefits of good enough

    Being good enough doesn’t just lead to practical outcomes; it can benefit our personal well-being too. Our time, after all, is a precious commodity and we can’t magically increase this resource. The pursuit of perfection can lead to stagnation, and ultimately burnout, whereas achieving good enough allows us to move on in a timely fashion.

    A good-enough approach will even make you less stressed. By getting things done sooner and achieving more, you’ll feel freer and happier about your work even if it means accepting imperfection. Mistakes and errors are inevitable in life, so don’t be afraid to make them; use them as learning opportunities, rather than seeing them as something bad. Remember – the person who never made a mistake never got out of bed.

    Recognizing that you’ve done the best you can for now is also crucial for starting new projects and making progress. By accepting good enough you can build momentum, get more things done, and consistently take actions toward achieving your goals.

    Finally, good enough is also about shared ownership. By inviting someone else to look at what you’ve done, you can significantly speed up the process. In my own career I’ve often found myself agonising over some obscure detail or feeling something is missing, only to have my quandary solved almost instantly simply by getting someone else involved – making me wish I’d asked them sooner.

    Caveats and conclusions

    Good enough comes with some caveats. Regulatory or legislative requirements mean there will always be projects that have to reach a minimum standard, which will be your top priority. The precise nature of good enough will also depend on whether you’re making stuff (be it cars or computers) or dealing with intangible commodities such as software or services.

    So what’s the conclusion? Well, in the interests of my own time, I’ve decided to apply the 80/20 rule and leave it to you to draw your own conclusion. As far as I’m concerned, I think this article has been good enough, but I’m sure you’ll let me know if it hasn’t. Consider it as a minimum viable product that I can update in a future column.

    The post When is good enough ‘good enough’? appeared first on Physics World.

    https://physicsworld.com/a/when-is-good-enough-good-enough/
    Honor Powrie

    Looking for inconsistencies in the fine structure constant

    High-precision laser spectroscopy measurements on the thorium-229 nucleus could reveal new physics, say TU Wien physicists

    The post Looking for inconsistencies in the fine structure constant appeared first on Physics World.

    The core element of the experiment: a crystal containing thorium atoms. (Courtesy: TU Wien)

    New high-precision laser spectroscopy measurements on thorium-229 nuclei could shed more light on the fine structure constant, which determines the strength of the electromagnetic interaction, say physicists at TU Wien in Austria.

    The electromagnetic interaction is one of the four known fundamental forces in nature, with the others being gravity and the strong and weak nuclear forces. Each of these fundamental forces has an interaction constant that describes its strength in comparison with the others. The fine structure constant, α, has a value of approximately 1/137. If it had any other value, charged particles would behave differently, chemical bonding would manifest in another way and light-matter interactions as we know them would not be the same.
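    The quoted value of roughly 1/137 follows directly from the definition α = e²/4πε₀ħc, as a one-line check with CODATA constants (my check, not part of the study) shows:

```python
from scipy.constants import e, epsilon_0, hbar, c, pi

# The fine structure constant from its definition, alpha = e^2/(4*pi*eps0*hbar*c),
# reproduces the "approximately 1/137" quoted above.
alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)
print(f"alpha = {alpha:.9f} = 1/{1/alpha:.3f}")   # ~1/137.036
```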

    “As the name ‘constant’ implies, we assume that these forces are universal and have the same values at all times and everywhere in the universe,” explains study leader Thorsten Schumm from the Institute of Atomic and Subatomic Physics at TU Wien. “However, many modern theories, especially those concerning the nature of dark matter, predict small and slow fluctuations in these constants. Demonstrating a non-constant fine-structure constant would shatter our current understanding of nature, but to do this, we need to be able to measure changes in this constant with extreme precision.”

    With thorium spectroscopy, he says, we now have a very sensitive tool to search for such variations.

    Nucleus becomes slightly more elliptic

    The new work builds on a project that led, last year, to the world’s first nuclear clock, and is based on precisely determining how the thorium-229 (229Th) nucleus changes shape when one of its neutrons transitions from a ground state to a higher-energy state. “When excited, the 229Th nucleus becomes slightly more elliptic,” Schumm explains. “Although this shape change is small (at the 2% level), it dramatically shifts the contributions of the Coulomb interactions (the repulsion between protons in the nucleus) to the nuclear quantum states.”

    The result is a change in the geometry of the 229Th nucleus’ electric field, to a degree that depends very sensitively on the value of the fine structure constant. By precisely observing this thorium transition, it is therefore possible to measure whether the fine-structure constant is actually a constant or whether it varies slightly.

    After making crystals of 229Th doped in a CaF2 matrix at TU Wien, the researchers performed the next phase of the experiment in a JILA laboratory at the University of Colorado, Boulder, US, firing ultrashort laser pulses at the crystals. While they did not measure any changes in the fine structure constant, they did succeed in determining how such changes, if they exist, would translate into modifications to the energy of the first nuclear excited state of 229Th.

    “It turns out that this change is huge, a factor 6000 larger than in any atomic or molecular system, thanks to the high energy governing the processes inside nuclei,” Schumm says. “This means that we are by a factor of 6000 more sensitive to fine structure variations than previous measurements.”

    Increasing the spectroscopic accuracy of the 229Th transition

    Researchers in the field have debated the likelihood of such an “enhancement factor” for decades, and theoretical predictions of its value have varied between zero and 10 000. “Having confirmed such a high enhancement factor will now allow us to trigger a ‘hunt’ for the observation of fine structure variations using our approach,” Schumm says.

    Andrea Caputo of CERN’s theoretical physics department, who was not involved in this work, calls the experimental result “truly remarkable”, as it probes nuclear structure with a precision that has never been achieved before. However, he adds that the theoretical framework is still lacking. “In a recent work published shortly before this work, my collaborators and I showed that the nuclear-clock enhancement factor K is still subject to substantial theoretical uncertainties,” Caputo says. “Much progress is therefore still required on the theory side to model the nuclear structure reliably.”

    Schumm and colleagues are now working on increasing the spectroscopic accuracy of their 229Th transition measurement by another one to two orders of magnitude. “We will then start hunting for fluctuations in the transition energy,” he reveals, “tracing it over time and – through the Earth’s movement around the Sun – space.”

    The present work is detailed in Nature Communications.

    The post Looking for inconsistencies in the fine structure constant appeared first on Physics World.

    https://physicsworld.com/a/looking-for-inconsistencies-in-the-fine-structure-constant/
    Isabelle Dumé

    Heat engine captures energy as Earth cools at night

    System can generate electricity when solar cells cannot

    The post Heat engine captures energy as Earth cools at night appeared first on Physics World.

    A new heat engine driven by the temperature difference between Earth’s surface and outer space has been developed by Tristan Deppe and Jeremy Munday at the University of California Davis. In an outdoor trial, the duo showed how their engine could offer a reliable source of renewable energy at night.

    While solar cells do a great job of converting the Sun’s energy into electricity, they have one major drawback, as Munday explains: “Lack of power generation at night means that we either need storage, which is expensive, or other forms of energy, which often come from fossil fuel sources.”

    One solution is to exploit the fact that the Earth’s surface absorbs heat from the Sun during the day and then radiates some of that energy into space at night. While space has a temperature of around −270 °C, the average temperature of Earth’s surface is a balmy 15 °C. Together, these two heat reservoirs provide the essential ingredients of a heat engine, which is a device that extracts mechanical work as thermal energy flows from a heat source to a heat sink.

    Coupling to space

    “At first glance, these two entities appear too far apart to be connected through an engine. However, by radiatively coupling one side of the engine to space, we can achieve the needed temperature difference to drive the engine,” Munday explains.

    For the concept to work, the engine must radiate the energy it extracts from the Earth within the atmospheric transparency window. This is a narrow band of infrared wavelengths that pass directly into outer space without being absorbed by the atmosphere.

    To demonstrate this concept, Deppe and Munday created a Stirling engine, which operates through the cyclical expansion and contraction of an enclosed gas as it moves between hot and cold ends. In their setup, the ends were aligned vertically, with a pair of plates connecting each end to the corresponding heat reservoir.

    For the hot end, an aluminium mount was pressed into soil, transferring the Earth’s ambient heat to the engine’s bottom plate. At the cold end, the researchers attached a black-coated plate that emitted an upward stream of infrared radiation within the transparency window.

    Outdoor experiments

    In a series of outdoor experiments performed throughout the year, this setup maintained a temperature difference greater than 10 °C between the two plates during most months. This was enough to extract more than 400 mW per square metre of mechanical power throughout the night.
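    A back-of-envelope Carnot estimate (mine, not taken from the paper) puts these numbers in context: a 10 °C difference at roughly ambient temperature limits the ideal efficiency to a few per cent, so even a perfect engine would need a heat flow of tens of watts per square metre to deliver the measured output.

```python
# Back-of-envelope Carnot estimate (not from the paper): with the hot plate at
# roughly ambient temperature and the radiatively cooled plate ~10 K colder,
# the ideal efficiency is only a few per cent, so delivering the reported
# 0.4 W/m^2 of mechanical power requires tens of W/m^2 of heat flow.
T_hot, dT = 288.0, 10.0              # kelvin; ~15 C surface, ~10 K plate difference
eta_carnot = dT / T_hot              # Carnot efficiency, 1 - T_cold/T_hot
power_out = 0.4                      # reported mechanical power, W/m^2
print(f"Carnot efficiency: {eta_carnot * 100:.1f} %")                               # ~3.5 %
print(f"heat flow needed at the Carnot limit: {power_out / eta_carnot:.0f} W/m^2")  # ~12
```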

    “We were able to generate enough power to run a mechanical fan, which could be used for air circulation in greenhouses or residential buildings,” Munday describes. “We also configured the device to produce both mechanical and electrical power simultaneously, which adds to the flexibility of its operation.”

    With this promising early demonstration, the researchers now predict that future improvements could enable the system to extract as much as 6 W per square metre under the same conditions. If rolled out commercially, the heat engine could help reduce the reliance of solar power on night-time energy storage – potentially opening a new route to cutting carbon emissions.

    The research is described in Science Advances.

    The post Heat engine captures energy as Earth cools at night appeared first on Physics World.

    https://physicsworld.com/a/heat-engine-captures-energy-as-earth-cools-at-night/
    No Author

    Microscale ‘wave-on-a-chip’ device sheds light on nonlinear hydrodynamics

    New device could help us better understand phenomena from ocean waves and hurricanes to weather and climate

    The post Microscale ‘wave-on-a-chip’ device sheds light on nonlinear hydrodynamics appeared first on Physics World.

    A new microscale version of the flumes that are commonly used to reproduce wave behaviour in the laboratory will make it far easier to study nonlinear hydrodynamics. The device consists of a layer of superfluid helium just a few atoms thick on a silicon chip, and its developers at the University of Queensland, Australia, say it could help us better understand phenomena ranging from oceans and hurricanes to weather and climate.

    “The physics of nonlinear hydrodynamics is extremely hard to model because of instabilities that ultimately grow into turbulence,” explains study leader Warwick Bowen of Queensland’s Quantum Optics Laboratory. “It is also very hard to study in experiments since these often require hundreds-of-metre-long wave flumes.”

    While such flumes are good for studying shallow-water dynamics like tsunamis and rogue waves, Bowen notes that they struggle to access many of the complex wave behaviours, such as turbulence, found in nature.

    Amplifying the nonlinearities in complex behaviours

    The team say that the geometrical structure of the new wave-on-a-chip device can be designed at will using lithographic techniques and built in a matter of days. Superfluid helium placed on its surface can then be controlled optomechanically. Thanks to these innovations, the researchers were able to experimentally measure nonlinear hydrodynamics millions of times faster than would be possible using traditional flumes. They could also “amplify” the nonlinearities of complex behaviours, making them orders of magnitude stronger than is possible in even the largest wave flumes.

    “This promises to change the way we do nonlinear hydrodynamics, with the potential to discover new equations that better explain the complex physics behind it,” Bowen says. “Such a technique could be used widely to improve our ability to predict both natural and engineered hydrodynamic behaviours.”

    So far, thanks to the chip, the team has measured several effects, including wave steepening, shock fronts and solitary wave fission. While these nonlinear behaviours had been predicted in superfluids, they had never been directly observed there until now.
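    Wave steepening and shock formation are generic features of nonlinear hydrodynamics, and can be illustrated with the inviscid Burgers equation – a standard toy model, not the team’s superfluid equations. In the sketch below, a smooth wave profile steepens dramatically into a near-discontinuous front as it evolves:

```python
import numpy as np

# Generic illustration (not the Queensland superfluid model) of nonlinear wave
# steepening: the inviscid Burgers equation u_t + u u_x = 0, solved with a
# first-order upwind scheme. A smooth wave profile steepens into a shock front.
nx, L = 400, 2 * np.pi
dx = L / nx
x = np.arange(nx) * dx
u = 1.5 + 0.5 * np.sin(x)            # smooth initial profile, u > 0 everywhere
dt = 0.4 * dx / u.max()              # CFL-limited time step

def max_slope(u):
    return np.max(np.abs(np.roll(u, -1) - u)) / dx

print(f"initial maximum slope: {max_slope(u):.2f}")
for _ in range(800):
    # upwind update (valid because u > 0): u_i <- u_i - dt/dx * u_i * (u_i - u_{i-1})
    u = u - dt / dx * u * (u - np.roll(u, 1))
print(f"maximum slope after 800 steps: {max_slope(u):.2f}  (wave has steepened into a shock)")
```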

    Waves can be generated in a very shallow depth

    The Quantum Optics Laboratory researchers have been studying superfluid helium for over a decade. A key feature of this quantum liquid is that it flows without resistance, similar to the way electrons move without resistance in a superconductor. “We realized that this behaviour could be exploited in experimental studies of nonlinear hydrodynamics because it allows waves to be generated in a very shallow depth – even down to just a few atoms deep,” Bowen explains.

    In conventional fluids, Bowen continues, resistance to motion becomes hugely important at small scales, and ultimately limits the nonlinear strengths accessible in traditional flume-based testing rigs. “Moving from the tens-of-centimetre depths of these flumes to tens-of-nanometres, we realized that superfluid helium could allow us to achieve many orders of magnitude stronger nonlinearities – comparable to the largest flows in the ocean – while also greatly increasing measurement speeds. It was this potential that attracted us to the project.”

    The experiments were far from simple, however. To do them, the researchers needed to cryogenically cool the system to near absolute zero temperatures. They also needed to fabricate exceptionally thin superfluid helium films that interact very weakly with light, as well as optical devices with structures smaller than a micron. Combining all these components required what Bowen describes as “something of a hero experiment”, with important contributions coming from the team’s co-leader, Christopher Baker, and Walter Wasserman, who was then a PhD student in the group. The wave dynamics themselves, Bowen adds, were “exceptionally complex” and were analysed by Matthew Reeves, the first author of a Science paper describing the device.

    As well as the application areas mentioned earlier, the team say the new work, which is supported by the US Defense Advanced Research Projects Agency’s APAQuS Program, could also advance our understanding of strongly interacting quantum systems that are difficult to model theoretically. “Superfluid helium is a classic example of such a system,” explains Bowen, “and our measurements represent the most precise measurements of wave physics in these. Other applications may be found in quantum technologies, where the flow of superfluid helium could – somewhat speculatively – replace superconducting electron flow in future quantum computing architectures.”

    The researchers now plan to use the device and machine learning techniques to search for new hydrodynamics equations.

    The post Microscale ‘wave-on-a-chip’ device sheds light on nonlinear hydrodynamics appeared first on Physics World.

    https://physicsworld.com/a/microscale-wave-on-a-chip-device-sheds-light-on-nonlinear-hydrodynamics/
    Isabelle Dumé

    Electrical charge on objects in optical tweezers can be controlled precisely

    New technique could shed light on electrification of aerosols

    The post Electrical charge on objects in optical tweezers can be controlled precisely appeared first on Physics World.

    An effect first observed decades ago by Nobel laureate Arthur Ashkin has been used to fine tune the electrical charge on objects held in optical tweezers. Developed by an international team led by Scott Waitukaitis of the Institute of Science and Technology Austria, the new technique could improve our understanding of aerosols and clouds.

    Optical tweezers use focused laser beams to trap and manipulate small objects about 100 nm to 1 micron in size. Their precision and versatility have made them a staple across fields from quantum optics to biochemistry.

    Ashkin shared the 2018 Nobel prize for inventing optical tweezers, and back in the 1970s he noticed that trapped objects can be electrically charged by the laser light. “However, his paper didn’t get much attention, and the observation has essentially gone ignored,” explains Waitukaitis.

    Waitukaitis’ team rediscovered the effect while using optical tweezers to study how charges build up in the ice crystals accumulating inside clouds. In their experiment, micron-sized silica spheres stood in for the ice, but Ashkin’s charging effect got in their way.

    Bummed out

    “Our goal has always been to study charged particles in air in the context of atmospheric physics – in lightning initiation or aerosols, for example,” Waitukaitis recalls. “We never intended for the laser to charge the particle, and at first we were a bit bummed out that it did so.”

    Their next thought was that they had discovered a new and potentially useful phenomenon. “Out of due diligence we of course did a deep dive into the literature to be sure that no one had seen it, and that’s when we found the old paper from Ashkin,” says Waitukaitis.

    In 1976, Ashkin described how optically trapped objects become charged through a nonlinear process whereby electrons absorb two photons simultaneously. These electrons can acquire enough energy to escape the object, leaving it with a positive charge.

    Yet beyond this insight, Ashkin “wasn’t able to make much sense of the effect,” Waitukaitis explains. “I have the feeling he found it an interesting curiosity and then moved on.”

    Shaking and scattering

    To study the effect in more detail, the team modified their optical tweezers setup so that its two copper lens holders doubled as electrodes, allowing them to apply an electric field along the axis of the two counter-propagating laser beams that confine the sphere. If the silica sphere became charged, this field would cause it to shake, scattering a portion of the laser light back towards each lens.

    The researchers picked off this portion of the scattered light using a beam splitter, then diverted it to a photodiode, allowing them to track the sphere’s position. Finally, they converted the measured amplitude of the shaking particle into a real-time charge measurement. This allowed them to track the relationship between the sphere’s charge and the laser’s tuneable intensity.
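
    A back-of-the-envelope way to see how a measured shake amplitude becomes a charge is to treat the trapped sphere as a damped harmonic oscillator driven by the applied AC field and invert its response. The Python sketch below does exactly that; every number in it is a placeholder chosen for illustration, not a value from the experiment.

    ```python
    import numpy as np

    # Toy conversion: the amplitude A of a driven, damped oscillator of mass m is
    # A = q*E0 / (m*sqrt((w0^2 - w^2)^2 + (gamma*w)^2)), so invert for the charge q.
    def charge_from_amplitude(A, E0, m, w0, w, gamma):
        return A * m * np.sqrt((w0**2 - w**2)**2 + (gamma * w)**2) / E0

    m = 4.0 / 3.0 * np.pi * (0.5e-6)**3 * 2200.0      # ~1-micron silica sphere (kg)
    q = charge_from_amplitude(A=2e-9,                 # 2 nm measured shake amplitude (placeholder)
                              E0=1e4,                 # 10 kV/m drive field (placeholder)
                              m=m,
                              w0=2 * np.pi * 100e3,   # trap frequency (placeholder)
                              w=2 * np.pi * 20e3,     # drive frequency (placeholder)
                              gamma=2 * np.pi * 5e3)  # damping rate (placeholder)
    print(f"inferred charge: {q:.2e} C (~{q / 1.602e-19:.0f} elementary charges)")
    ```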

    Their measurements confirmed Ashkin’s 1976 hypothesis that electrons on optically-trapped objects undergo two-photon absorption, allowing them to escape. Waitukaitis and colleagues improved on this model and showed how the charge on a trapped object can be controlled precisely by simply adjusting the laser’s intensity.

    As for the team’s original research goal, the effect has actually been very useful for studying the behaviour of charged aerosols.

    “We can get [an object] so charged that it shoots off little ‘microdischarges’ from its surface due to breakdown of the air around it, involving just a few or tens of electron charges at a time,” Waitukaitis says. “This is going to be really cool for studying electrostatic phenomena in the context of particles in the atmosphere.”

    The study is described in Physical Review Letters.

    The post Electrical charge on objects in optical tweezers can be controlled precisely appeared first on Physics World.

    https://physicsworld.com/a/electrical-charge-on-objects-in-optical-tweezers-can-be-controlled-precisely/
    No Author

    Quantum gravity: we explore spin foams and other potential solutions to this enduring challenge

    Bianca Dittrich of the Perimeter Institute is our podcast guest

    The post Quantum gravity: we explore spin foams and other potential solutions to this enduring challenge appeared first on Physics World.

    Earlier this autumn I had the pleasure of visiting the Perimeter Institute for Theoretical Physics in Waterloo, Canada – where I interviewed four physicists about their research. This is the second of those conversations to appear on the podcast – and it is with Bianca Dittrich, whose research focuses on quantum gravity.

    Albert Einstein’s general theory of relativity does a great job at explaining gravity but it is thought to be incomplete because it is incompatible with quantum mechanics. This is an important shortcoming because quantum mechanics is widely considered to be one of science’s most successful theories.

    Developing a theory of quantum gravity is a crucial goal in physics, but it is proving to be extremely difficult. In this episode, Dittrich explains some of the challenges and talks about ways forward – including her current research on spin foams. We also chat about the intersection of quantum gravity and condensed matter physics; and the difficulties of testing theories against observational data.

    • The first interview in this series from the PI was with Javier Toledo-Marín: “Quantum computing and AI join forces for particle physics”

    IOP Publishing’s new Progress In Series: Research Highlights website offers quick, accessible summaries of top papers from leading journals like Reports on Progress in Physics and Progress in Energy. Whether you’re short on time or just want the essentials, these highlights help you expand your knowledge of leading topics.

    The post Quantum gravity: we explore spin foams and other potential solutions to this enduring challenge appeared first on Physics World.

    https://physicsworld.com/a/quantum-gravity-we-explore-spin-foams-and-other-potential-solutions/
    Hamish Johnston

    Can fast qubits also be robust?

    Spin-orbit interaction adjustment produces "best of both worlds" scenario

    The post Can fast qubits also be robust? appeared first on Physics World.

    Qubit central: This work was carried out as part of the National Center of Competence in Research SPIN (NCCR SPIN), which is led by the University of Basel, Switzerland. NCCR SPIN focuses on creating scalable spin qubits in semiconductor nanostructures made of silicon and germanium, with the aim of developing small, fast qubits for a universal quantum computer. (Courtesy: A Efimov)

    Qubits – the building blocks of quantum computers – are plagued with a seemingly insurmountable dilemma. If they’re fast, they aren’t robust. And if they’re robust, they aren’t fast. Both qualities are important, because all potentially useful quantum algorithms rely on being able to perform many manipulations on a qubit before its state decays. But whereas fast qubits are typically realized by coupling them strongly to the external environment, enabling them to interact more strongly with the driving field, robust qubits with long coherence times are typically achieved by isolating them from their environment.

    These seemingly contradictory requirements made simultaneously fast and robust qubits an unsolved challenge – until now. In an article published in Nature Communications, a team of physicists led by Dominik Zumbühl from the University of Basel, Switzerland, shows that it is, in fact, possible to increase both the coherence time and operational speed of a qubit, demonstrating a pathway out of this long-standing impasse.

    The magic ingredient

    The key ingredient driving this discovery is something called the direct Rashba spin-orbit interaction. The best-known example of spin-orbit interaction comes from atomic physics. Consider a hydrogen atom, in which a single electron revolves around a single proton in the nucleus. During this orbital motion, the electron moves through the static electric field generated by the positively charged nucleus; in the electron’s rest frame this electric field appears as an effective magnetic field that couples to the electron’s intrinsic magnetic moment, or spin. This coupling of the electron’s orbital motion to its spin is called spin-orbit (SO) interaction.
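
    For reference, the textbook form of this coupling as it appears in atomic physics (and not the device Hamiltonian used in the new work) is

    ```latex
    H_{\mathrm{SO}} \;=\; \frac{\hbar}{4 m^{2} c^{2}}\,
      \boldsymbol{\sigma}\cdot\left(\nabla V \times \mathbf{p}\right),
    ```

    where V is the electrostatic potential seen by the electron, p its momentum and σ the vector of Pauli spin matrices.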

    Aided by collaborators at the University of Oxford, UK and TU Eindhoven in the Netherlands, Zumbühl and colleagues chose to replace this simple SO interaction with a far more complex landscape of electrostatic potential generated by a 10-nanometer-thick germanium wire coated with a thin silicon shell. By removing a single electron from this wire, they create states known as holes that can be used as qubits, with quantum information being encoded in the hole’s spin.

    Importantly, the underlying crystal structure of the silicon-coated germanium wire constrains these holes to discrete energy levels called bands. “If you were to mathematically model a low-level hole residing in one of these bands using perturbation theory – a commonly applied method in which more remote bands are treated as corrections to the ground state – you would find a term that looks structurally similar to the spin–orbit interaction known from atomic physics,” explains Miguel Carballido, who conducted the work during his PhD at Basel, and is now a senior research associate at the University of New South Wales’ School of Electrical Engineering and Telecommunications in Sydney, Australia.

    By encoding the quantum states in these energy levels, the spin-orbit interaction can be used to drive the hole-qubit between its two spin states. What makes this interaction special is that it can be tuned using an external electric field. Thus, by applying a stronger electric field, the interaction can be strengthened – resulting in faster qubit manipulation.

    Uncompromising performance: Results showing qubit speed plateauing (top panel) and qubit coherence times peaking (bottom panel) at an applied electric field around 1330 mV, demonstrating that qubit speed and coherence times can be simultaneously optimized. (CC BY ND 4.0 MJ Carballido et al. “Compromise-free scaling of qubit speed and coherence” 2025 Nat. Commun. 16 7616)

    Reaching a plateau

    This ability to make a qubit faster by tuning an external parameter isn’t new. The difference is that whereas in other approaches, a stronger interaction also means higher sensitivity to fluctuations in the driving field, the Basel researchers found a way around this problem. As they increase the electric field, the spin-orbit interaction increases up to a certain point. Beyond this point, any further increase in the electric field will cause the hole to remain stuck within a low energy band. This restricts the hole’s ability to interact with other bands to change its spin, causing the SO interaction strength to drop.

    By tuning the electric field to this peak, they can therefore operate in a “plateau” region where the SO interaction is the strongest, but the sensitivity to noise is the lowest. This leads to high coherence times (see figure), meaning that the qubit remains in the desired quantum state for longer. By reaching this plateau, where the qubit is both fast and robust, the researchers demonstrate the ability to operate their device in the “compromise-free” regime.
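
    A toy numerical picture of that sweet spot is sketched below in Python. The coupling curve is made up, not the Basel team's data; the point is only that qubit speed tracks the coupling itself while noise sensitivity tracks its slope, so the plateau is where both are favourable at once.

    ```python
    import numpy as np

    # Toy spin-orbit coupling strength versus gate field (entirely illustrative).
    E = np.linspace(1200, 1450, 501)                    # gate voltage, mV
    Omega = 80.0 * np.exp(-((E - 1330.0) / 60.0) ** 2)  # coupling / qubit speed, MHz

    dOmega_dE = np.gradient(Omega, E)        # sensitivity to field (charge) noise
    speed = Omega / Omega.max()              # want this large
    noise = np.abs(dOmega_dE)                # want this small

    sweet = np.argmin(noise / (speed + 1e-9))   # crude combined figure of merit
    print(f"toy sweet spot near E = {E[sweet]:.0f} mV, "
          f"Omega = {Omega[sweet]:.1f} MHz, |dOmega/dE| = {noise[sweet]:.3f} MHz/mV")
    ```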

    So, is quantum computing now a solved problem? The researchers’ answer is “not yet”, as there are still many challenges to overcome. “A lot of the heavy lifting is being done by the quasi 1D system provided by the nanowire,” remarks Carballido, “but this also limits scalability.” He also notes that the success of the experiment depends on being able to fabricate each qubit device very precisely, and doing this reproducibly remains a challenge.

    The post Can fast qubits also be robust? appeared first on Physics World.

    https://physicsworld.com/a/can-fast-qubits-also-be-robust/
    Yash Wath

    Did cannibal stars and boson stars populate the early universe?

    Objects formed by exotic particles could have created primordial black holes

    The post Did cannibal stars and boson stars populate the early universe? appeared first on Physics World.

    In the early universe, moments after the Big Bang and cosmic inflation, clusters of exotic, massive particles could have collapsed to form bizarre objects called cannibal stars and boson stars. In turn, these could have then collapsed to form primordial black holes – all before the first elements were able to form.

    This curious chain of events is predicted by a new model proposed by a trio of scientists at SISSA, the International School for Advanced Studies in Trieste, Italy.

    Their proposal involves a hypothetical moment in the early universe called the early matter-dominated (EMD) epoch. This would have lasted only a few seconds after the Big Bang, but could have been dominated by exotic particles, such as the massive, supersymmetric particles predicted by string theory.

    “There are no observations that hint at the existence of an EMD epoch – yet!” says SISSA’s Pranjal Ralegankar. “But many cosmologists are hoping that an EMD phase occurred because it is quite natural in many models.”

    Some models of the early universe predict the formation of primordial black holes from quantum fluctuations in the inflationary field. Now, Ralegankar and his colleagues Daniele Perri and Takeshi Kobayashi propose a new and more natural pathway for forming primordial black holes via an EMD epoch.

    They postulate that in the first second of existence, when the universe was small and incredibly hot, exotic massive particles emerged and clustered in dense haloes. The SISSA physicists propose that these haloes then collapsed into hypothetical objects called cannibal stars and boson stars.

    Cannibal stars are powered by particles annihilating each other, which would have allowed the objects to resist further gravitational collapse for a few seconds. However, they would not have produced light like normal stars.

    “The particles in a cannibal star can only talk to each other, which is why they are forced to annihilate each other to counter the immense pressure from gravity,” Ralegankar tells Physics World. “They are immensely hot, simply because the particles that we consider are so massive. The temperature of our cannibal stars can range from a few GeV to on the order of 10¹⁰ GeV. For comparison, the Sun is on the order of keV.”
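
    To put those energy scales into more familiar units, a quick back-of-the-envelope conversion via T = E/k_B (our arithmetic, not the paper's) gives

    ```latex
    T = \frac{E}{k_{\mathrm B}}, \qquad
    1\ \mathrm{keV} \approx 1.2\times10^{7}\ \mathrm{K}, \qquad
    1\ \mathrm{GeV} \approx 1.2\times10^{13}\ \mathrm{K}, \qquad
    10^{10}\ \mathrm{GeV} \approx 1.2\times10^{23}\ \mathrm{K}.
    ```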

    Boson stars, meanwhile, would be made from a pure Bose–Einstein condensate – a state of matter in which the individual particles act quantum mechanically as one.

    Both the cannibal stars and boson stars would exist within larger haloes that would quickly collapse to form primordial black holes with masses about the same as asteroids (about 10¹⁴–10¹⁹ kg). All of this could have taken place just 10 s after the Big Bang.

    Dark matter possibility

    Ralegankar, Perri and Kobayashi point out that the total mass of primordial black holes that their model produces matches the amount of dark matter in the universe.

    “Current observations rule out black holes to be dark matter, except in the asteroid-mass range,” says Ralegankar. “We showed that our models can produce black holes in that mass range.”

    Richard Massey, who is a dark-matter researcher at Durham University in the UK, agrees that microlensing observations by projects such as the Optical Gravitational Lensing Experiment (OGLE) have ruled out a population of black holes with planetary masses, but not asteroid masses. However, Massey is doubtful that these black holes could make up dark matter.

    “It would be pretty contrived for them to make up a large fraction of what we call dark matter,” he says. “It’s possible that dark matter could be these primordial black holes, but they’d need to have been created with the same mass no matter where they were and whatever environment they were in, and that mass would have to be tuned to evade current experimental evidence.”

    In the coming years, upgrades to OGLE and the launch of NASA’s Roman Space Telescope should finally provide sensitivity to microlensing events produced by objects in the asteroid mass range, allowing researchers to settle the matter.

    It is also possible that cannibal and boson stars exist today, produced by collapsing haloes of dark matter. But unlike those proposed for the early universe, modern cannibal and boson stars would be stable and long-lasting.

    “Much work has already been done for boson stars from dark matter, and we are simply suggesting that future studies should also think about the possibility of cannibal stars from dark matter,” explains Ralegankar. “Gravitational lensing would be one way to search for them, and depending on models, maybe also gamma rays from dark-matter annihilation.”

    The research is described in Physical Review D.

    The post Did cannibal stars and boson stars populate the early universe? appeared first on Physics World.

    https://physicsworld.com/a/did-cannibal-stars-and-boson-stars-populate-the-early-universe/
    No Author

    Academic assassinations are a threat to global science

    Alireza Qaiumzadeh says that science can only exist if scientists are protected as civilians

    The post Academic assassinations are a threat to global science appeared first on Physics World.

    The deliberate targeting of scientists in recent years has become one of the most disturbing, and overlooked, developments in modern conflict. In particular, Iranian physicists and engineers have been singled out for almost two decades, with sometimes fatal consequences. In 2007 Ardeshir Hosseinpour, a nuclear physicist at Shiraz University, died in mysterious circumstances that were widely attributed to poisoning or radioactive exposure.

    Over the following years, at least five more Iranian researchers have been killed. They include particle physicist Masoud Ali-Mohammadi, who was Iran’s representative at the Synchrotron-light for Experimental Science and Applications in the Middle East project. Known as SESAME, it is the only scientific project in the Middle East where Iran and Israel collaborate.

    Others to have died include nuclear engineer Majid Shahriari, another Iranian representative at SESAME, and nuclear physicist Mohsen Fakhrizadeh, who were both killed by bombing or gunfire in Tehran. These attacks were never formally acknowledged, nor were they condemned by international scientific institutions. The message, however, was implicit: scientists in politically sensitive fields could be treated as strategic targets, even far from battlefields.

    What began as covert killings of individual researchers has now escalated, dangerously, into open military strikes on academic communities. Israeli airstrikes on residential areas in Tehran and Isfahan during the 12-day conflict between the two countries in June led to at least 14 Iranian scientists and engineers and members of their family being killed. The scientists worked in areas such as materials science, aerospace engineering and laser physics. I believe this shift, from covert assassinations to mass casualties, crossed a line. It treats scientists as enemy combatants simply because of their expertise.

    The assassinations of scientists are not just isolated tragedies; they are a direct assault on the global commons of knowledge, corroding both international law and international science. Unless the world responds, I believe the precedent being set will endanger scientists everywhere and undermine the principle that knowledge belongs to humanity, not the battlefield.

    Drawing a red line

    International humanitarian law is clear: civilians, including academics, must be protected. Targeting scientists based solely on their professional expertise undermines the Geneva Convention and erodes the civilian–military distinction at the heart of international law.

    Iran, whatever its politics, remains a member of the Nuclear Non-Proliferation Treaty and the International Atomic Energy Agency. Its scientists are entitled under international law to conduct peaceful research in medicine, energy and industry. Their work is no more inherently criminal than research that other countries carry out in artificial intelligence (AI), quantum technology or genetics.

    If we normalize the preemptive assassination of scientists, what stops global rivals from targeting, say, AI researchers in Silicon Valley, quantum physicists in Beijing or geneticists in Berlin? Once knowledge itself becomes a liability, no researcher is safe. Equally troubling is the silence of the international scientific community: organizations such as the UN, UNESCO and the European Research Council, as well as leading national academies, have not condemned these killings, past or present.

    Silence is not neutral. It legitimizes the treatment of scientists as military assets. It discourages international collaboration in sensitive but essential research and it creates fear among younger researchers, who may abandon high-impact fields to avoid risk. Science is built on openness and exchange, and when researchers are murdered for their expertise, the very idea of science as a shared human enterprise is undermined.

    The assassinations are not solely Iran’s loss. The scientists killed were part of a global community: collaborators and colleagues in the pursuit of knowledge. Their deaths should alarm every nation and every institution that depends on research to confront global challenges, from climate change to pandemics.

    I believe that international scientific organizations should act. At a minimum, they should publicly condemn the assassination of scientists and their families; support independent investigations under international law; as well as advocate for explicit protections for scientists and academic facilities in conflict zones.

    Importantly, voices within Israel’s own scientific community can play a critical role too. Israeli academics, deeply committed to collaboration and academic freedom, understand the costs of blurring the boundary between science and war. Solidarity cannot be selective.

    Recent events are a test case for the future of global science. If the international community tolerates the targeting of scientists, it sets a dangerous precedent that others will follow. What appears today as a regional assault on scientists from the Global South could tomorrow endanger researchers in China, Europe, Russia or the US.

    Science without borders can only exist if scientists are recognized and protected as civilians without borders. That principle is now under direct threat and the world must draw a red line – killing scientists for their expertise is unacceptable. To ignore these attacks is to invite a future in which knowledge itself becomes a weapon, and the people who create it expendable. That is a world no-one should accept.

    The post Academic assassinations are a threat to global science appeared first on Physics World.

    https://physicsworld.com/a/academic-assassinations-are-a-threat-to-global-science/
    No Author

    DNA as a molecular architect

    A new model shows how programmable DNA strands control interactions between diverse colloidal particles

    The post DNA as a molecular architect appeared first on Physics World.

    DNA is a fascinating macromolecule that guides protein production and enables cell replication. It has also found applications in nanoscience and materials design.

    Colloidal crystals are ordered structures made from tiny particles suspended in a fluid; the particles can bond to one another and add functionality to materials. By controlling colloidal particles, we can build advanced nanomaterials using a bottom-up approach. There are several ways to control colloidal particle design, ranging from experimental conditions such as pH and temperature to external controls like light and magnetic fields.

    An exciting approach is to use DNA-mediated processes. DNA binds to colloidal surfaces and regulates how the colloids organize, providing molecular-level control. These connections are reversible and can be broken using standard experimental conditions (e.g., temperature), allowing for dynamic and adaptable systems. One important motivation is their good biocompatibility, which has enabled applications in biomedicine such as drug delivery, biosensing, and immunotherapy.

    Programmable Atom Equivalents (PAEs) are large colloidal particles whose surfaces are functionalized with single-stranded DNA, while separate, much smaller DNA-coated linkers, called Electron Equivalents (EEs), roam in solution and mediate bonds between PAEs. In typical PAE-EE systems, the EEs carry multiple identical DNA ends that can all bind the same type of PAE, which limits the complexity of the assemblies and makes it harder to program highly specific connections between different PAE types.

    In this study, the researchers investigate how EEs with arbitrary valency, carrying many DNA arms, regulate interactions in a binary mixture of two types of PAEs. Each EE has multiple single-stranded DNA ends of two different types, each complementary to the DNA on one of the PAE species. The team develops a statistical mechanical model to predict how EEs distribute between the PAEs and to calculate the effective interaction, a measure of how strongly the PAEs attract each other, which in turn controls the structures that can form.

    Using this model, they inform Monte Carlo simulations to predict system behaviour under different conditions. The model shows quantitative agreement with simulation results and reveals an anomalous dependence of PAE-PAE interactions on EE valency, with interactions converging at high valency. Importantly, the researchers identify an optimal valency that maximizes selectivity between targeted and non-targeted binding pairs. This groundbreaking research provides design principles for programmable self-assembly and offers a framework that can be integrated into DNA nanoscience.
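
    To get a rough feel for why an optimal valency can exist at all, here is a deliberately simplified toy in Python. It is a combinatorial caricature of multi-arm bridging, not the authors' statistical mechanical model: each arm binds independently, a targeted A–B pair needs at least one arm of each type bound, and a non-targeted A–A pair must split its A-type arms between the two particles.

    ```python
    import numpy as np

    p = 0.15                     # per-arm binding probability (placeholder)
    n = np.arange(1, 41)         # valency: arms of each type per EE

    # targeted A-B bridge: at least one A-arm and one B-arm bound
    P_target = (1.0 - (1.0 - p) ** n) ** 2
    # non-targeted A-A bridge: the n A-arms must reach both particles
    P_offtarget = 1.0 - 2.0 * (1.0 - p / 2.0) ** n + (1.0 - p) ** n

    selectivity = P_target * (1.0 - P_offtarget)   # bind the right pair, avoid the wrong one
    best = n[np.argmax(selectivity)]
    print(f"toy optimal valency: {best} arms per type "
          f"(selectivity score {selectivity.max():.2f})")
    ```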

    Read the full article

    Designed self-assembly of programmable colloidal atom-electron equivalents

    Xiuyang Xia et al 2025 Rep. Prog. Phys. 88 078101

    Do you want to learn more about this topic?

    Assembly of colloidal particles in solution by Kun Zhao and Thomas G Mason (2018)

    The post DNA as a molecular architect appeared first on Physics World.

    https://physicsworld.com/a/dna-as-a-molecular-architect/
    Lorna Brigham

    The link between protein evolution and statistical physics

    Protein evolution plays a key role in many biological processes that are essential for life – but what does it have to do with physics?

    The post The link between protein evolution and statistical physics appeared first on Physics World.

    Proteins are made up of a sequence of building blocks called amino acids. Understanding these sequences is crucial for studying how proteins work, how they interact with other molecules, and how changes (mutations) can lead to diseases.

    These mutations happen over vastly different time periods and are not completely random but strongly correlated, both in space (distinct sites along the sequences) and in time (subsequent mutations of the same site).

    It turns out that these correlations are very reminiscent of disordered physical systems, notably glasses, emulsions, and foams.

    A team of researchers from Italy and France have now used this similarity to build a new statistical model to simulate protein evolution. They went on to study the role of different factors causing these mutations.

    They found that the initial (ancestral) protein sequence has a significant influence on the evolution process, especially in the short term. This means that information from the ancestral sequence can be traced back over a certain period and is not completely lost.

    The strength of interactions between different amino acids within the protein affects how long this information persists.

    Although ultimately the team did find differences between the evolution of physical systems and that of protein sequences, this kind of insight would not have been possible without using the language of statistical physics, i.e. space-time correlations.
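
    As a minimal illustration of what “memory of the ancestor” means in this language, the Python toy below evolves a binary sequence with independent site mutations and tracks its overlap with the ancestral sequence over time. It is far simpler than the model in the paper, which includes interactions between sites, but it shows the kind of time correlation being discussed.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L_seq, steps, mu = 200, 400, 0.01     # sequence length, time steps, per-site flip rate

    ancestor = rng.integers(0, 2, size=L_seq)
    seq = ancestor.copy()
    overlap = []
    for t in range(steps):
        flips = rng.random(L_seq) < mu              # independent mutations this step
        seq = np.where(flips, 1 - seq, seq)
        overlap.append(float(np.mean(seq == ancestor)))

    # For independent sites the memory decays as ~ 0.5 + 0.5*exp(-2*mu*t).
    print("overlap with ancestor after 50 / 200 / 400 steps:",
          round(overlap[49], 3), round(overlap[199], 3), round(overlap[-1], 3))
    ```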

    The researchers expect that their results will soon be tested in the lab thanks to upcoming advances in experimental techniques.

    Read the full article

    Fluctuations and the limit of predictability in protein evolution

    S. Rossi et al, 2025 Rep. Prog. Phys. 88 078102

    The post The link between protein evolution and statistical physics appeared first on Physics World.

    https://physicsworld.com/a/the-link-between-protein-evolution-and-statistical-physics/
    Paul Mabey

    ‘Caustic’ light patterns inspire new glass artwork

    The piece is based on the research of theoretical physicist Michael Berry

    The post ‘Caustic’ light patterns inspire new glass artwork appeared first on Physics World.

    UK artist Alison Stott has created a new glass and light artwork – entitled Naturally Focused – that is inspired by the work of theoretical physicist Michael Berry from the University of Bristol.

    Stott, who recently completed an MA in glass at Arts University Plymouth, previously spent over two decades working in visual effects for film and television, where she focussed on creating photorealistic imagery.

    Her studies touched on how complex phenomena can arise from seemingly simple set-ups, for example in a rotating glass sculpture lit by LEDs.

    “My practice inhabits the spaces between art and science, glass and light, craft and experience,” notes Stott. “Working with molten glass lets me embrace chaos, indeterminacy, and materiality, and my work with caustics explores the co-creation of light, matter, and perception.”

    The new artwork is based on “caustics” – the curved patterns that form when light is reflected or refracted by curved surfaces or objects.

    The focal point of the artwork is a hand-blown glass lens that was waterjet-cut into a circle and polished so that its internal structure and optical behaviour are clearly visible. The lens is suspended within stainless steel gyroscopic rings and held by a brass support and stainless steel backplate.

    The rings can be tilted or rotated to activate a “shifting field of caustic projections that ripple across” the artwork. Mathematical equations describing the “singularities of light” visible on the glass surface are also engraved onto the brass.

    The work is inspired by Berry’s research into the relationship between classical and quantum behaviour and how subtle geometric structures govern how waves and particles behave.

    Berry recently won the 2025 Isaac Newton Medal and Prize, which is presented by the Institute of Physics, for his “profound contributions across mathematical and theoretical physics in a career spanning over 60 years”.

    Stott says that working with Berry has pushed her understanding of caustics. “The more I learn about how these structures emerge and why they matter across physics, the more compelling they become,” notes Stott. “My aim is to let the phenomena speak for themselves, creating conditions where people can directly encounter physical behaviour and perhaps feel the same awe and wonder I do.”

    The artwork will go on display at the University of Bristol following a ceremony to be held on 27 November.

    The post ‘Caustic’ light patterns inspire new glass artwork appeared first on Physics World.

    https://physicsworld.com/a/caustic-light-patterns-inspire-new-glass-artwork/
    Michael Banks

    Is your WiFi spying on you?

    "Beamforming feedback information" in latest version of the technology can identify individuals passing through radio networks with almost 100% accuracy, say researchers

    The post Is your WiFi spying on you? appeared first on Physics World.

    WiFi networks could pose significant privacy risks even to people who aren’t carrying or using WiFi-enabled devices, say researchers at the Karlsruhe Institute of Technology (KIT) in Germany. According to their analysis, the current version of the technology passively records information that is detailed enough to identify individuals moving through networks, prompting them to call for protective measures in the next iteration of WiFi standards.

    Although wireless networks are ubiquitous and highly useful, they come with certain privacy and security risks. One such risk stems from a phenomenon known as WiFi sensing, which the researchers at KIT’s Institute of Information Security and Dependability (KASTEL) define as “the inference of information about the networks’ environment from its signal propagation characteristics”.

    “As signals propagate through matter, they interfere with it – they are either transmitted, reflected, absorbed, polarized, diffracted, scattered, or refracted,” they write in their study, which is published in the Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security (CCS ’25). “By comparing an expected signal with a received signal, the interference can be estimated and used for error correction of the received data.”

    An under-appreciated consequence, they continue, is that this estimation contains information about any humans who may have unwittingly been in the signal’s path. By carefully analysing the signal’s interference with the environment, they say, “certain aspects of the environment can be inferred” – including whether humans are present, what they are doing and even who they are.

    “Identity inference attack” is a threat

    The KASTEL team terms this an “identity inference attack” and describes it as a threat that is as widespread as it is serious. “This technology turns every router into a potential means for surveillance,” says Julian Todt, who co-led the study with his KIT colleague Thorsten Strufe. “For example, if you regularly pass by a café that operates a WiFi network, you could be identified there without noticing it and be recognized later – for example by public authorities or companies.”

    While Todt acknowledges that security services, cybercriminals and others do have much simpler ways of tracking individuals – for example by accessing data from CCTV cameras or video doorbells – he argues that the ubiquity of wireless networks lends itself to being co-opted as a near-permanent surveillance infrastructure. There is, he adds, “one concerning property” about wireless networks: “They are invisible and raise no suspicion.”

    Identity of individuals could be extracted using a machine-learning model

    Although the possibility of using WiFi networks in this way is not new, most previous WiFi-based security attacks worked by analysing so-called channel state information (CSI). These data indicate how a radio signal changes when it reflects off walls, furniture, people or animals. However, the KASTEL researchers note that more recent WiFi standards, beginning with WiFi 5 (802.11ac), change the picture by enabling a new and potentially easier form of attack based on beamforming feedback information (BFI).

    While beamforming uses similar information as CSI, Todt explains that it does so on the sender’s side instead of the receiver’s. This means that a BFI-based surveillance method would require nothing more than standard devices connected to the WiFi network. “The BFI could be used to create images from different perspectives that might then serve to identify persons that find themselves in the WiFi signal range,” Todt says. “The identity of individuals passing through these radio waves could then be extracted using a machine-learning model. Once trained, this model would make an identification in just a few seconds.”

    In their experiments, Todt and colleagues studied 197 participants as they walked through a WiFi field while being simultaneously recorded with CSI and BFI from four different angles. The participants had five different “walking styles” (such as walking normally and while carrying a backpack) as well as different gaits. The researchers found that they could identify individuals with nearly 100% accuracy, regardless of the recording angle or the individual’s walking style or gait.
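
    To make the attack pipeline concrete, the sketch below trains an off-the-shelf classifier on synthetic stand-in feature vectors, one per recorded walk. The data, feature dimension and model choice are illustrative assumptions only, not the KASTEL team's actual dataset or architecture.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(42)
    n_people, walks_per_person, n_features = 20, 30, 64

    # each hypothetical person gets a characteristic signature plus per-walk noise,
    # standing in for processed BFI/CSI measurements
    signatures = rng.normal(size=(n_people, n_features))
    X = np.vstack([sig + 0.5 * rng.normal(size=(walks_per_person, n_features))
                   for sig in signatures])
    y = np.repeat(np.arange(n_people), walks_per_person)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("re-identification accuracy on held-out walks:",
          round(accuracy_score(y_te, clf.predict(X_te)), 3))
    ```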

    “Risks to our fundamental rights”

    “The technology is powerful, but at the same time entails risks to our fundamental rights, especially to privacy,” says Strufe. He warns that authoritarian states could use the technology to track demonstrators and members of opposition groups, prompting him and his colleagues to “urgently call” for protective measures and privacy safeguards to be included in the forthcoming IEEE 802.11bf WiFi standard.

    “The literature on all novel sensing solutions highlights their utility for various novel applications,” says Todt, “but the privacy risks that are inherent to such sensing are often overlooked, or worse — these sensors are claimed to be privacy-friendly without any rationale for these claims. As such, we feel it necessary to point out the privacy risks that novel solutions such as WiFi sensing bring with them.”

    The researchers say they would like to see approaches developed that can mitigate the risk of identity inference attack. However, they are aware that this will be difficult, since this type of attack exploits the physical properties of the actual WiFi signal. “Ideally, we would influence the WiFi standard to contain privacy-protections in future versions,” says Todt, “but even the impact of this would be limited as there are already millions of WiFi devices out there that are vulnerable to such an attack.”

    The post Is your WiFi spying on you? appeared first on Physics World.

    https://physicsworld.com/a/is-your-wifi-spying-on-you/
    Isabelle Dumé

    Reversible degradation phenomenon in PEMWE cells

    Join the audience for a live webinar on 4 February 2026 sponsored by Scribner Associates, Gamry Instruments and Hiden Analytical, in partnership with The Electrochemical Society

    The post Reversible degradation phenomenon in PEMWE cells appeared first on Physics World.


    In proton exchange membrane water electrolysis (PEMWE) systems, voltage cycles dropping below a threshold are associated with reversible performance improvements, which remain poorly understood despite being documented in the literature. The distinction between reversible and irreversible performance changes is crucial for accurate degradation assessments. One approach in the literature to explain this behaviour is the oxidation and reduction of iridium: the activity and stability of iridium-based electrocatalysts in PEMWE hinge on their oxidation state, which is influenced by the applied voltage. Yet the dynamic performance of full PEMWE cells remains under-explored, with the focus typically on stability rather than activity. This study systematically investigates reversible performance behaviour in PEMWE cells using Ir-black as an anodic catalyst. Results reveal a recovery effect when the low voltage level drops below 1.5 V, with further enhancements observed as the voltage decreases, even with a holding time as short as 0.1 s. This reversible recovery is primarily driven by improved anode reaction kinetics, likely due to changing iridium oxidation states, and is supported by agreement between the experimental data and a dynamic model that links iridium oxidation/reduction processes to performance metrics. The model allows reversible and irreversible effects to be distinguished and enables the derivation of optimized operation schemes that exploit the recovery effect.
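
    To illustrate the kind of dynamic model described above, here is a deliberately crude sketch in Python (our own toy, not the presenters' validated model): an average iridium oxidation state relaxes towards a voltage-dependent equilibrium, and the reversible part of the anode overpotential is assumed to grow with that oxidation state, so a short dip below ~1.5 V temporarily improves performance.

    ```python
    import numpy as np

    def theta_eq(V):                       # placeholder equilibrium oxidation state vs voltage
        return 1.0 / (1.0 + np.exp(-(V - 1.5) / 0.05))

    dt, tau = 0.01, 5.0                    # time step and relaxation constant, s (placeholders)
    t = np.arange(0.0, 120.0, dt)
    V = np.where((t > 60.0) & (t < 60.1), 1.3, 1.9)   # ~0.1 s dip below the threshold

    theta = np.empty_like(t)
    theta[0] = theta_eq(V[0])
    for i in range(1, len(t)):             # first-order relaxation towards theta_eq(V)
        theta[i] = theta[i - 1] + dt / tau * (theta_eq(V[i]) - theta[i - 1])

    eta_rev = 0.05 * theta                 # toy reversible overpotential term, V
    print(f"reversible overpotential before dip: {eta_rev[int(59 / dt)] * 1e3:.1f} mV, "
          f"shortly after dip: {eta_rev[int(61 / dt)] * 1e3:.1f} mV")
    ```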

    Tobias Krenz

    Tobias Krenz is a simulation and modelling engineer at Siemens Energy in the Transformation of Industry business area, focusing on reducing energy consumption and carbon-dioxide emissions in industrial processes. He completed his PhD at Leibniz University Hannover in February 2025. He earned a degree from Berlin University of Applied Sciences in 2017 and an MSc from Technische Universität Darmstadt in 2020.

    Alexander Rex


    Alexander Rex is a PhD candidate at the Institute of Electric Power Systems at Leibniz University Hannover. He holds a degree in mechanical engineering from Technische Universität Braunschweig, an MEng from Tongji University, and an MSc from Karlsruhe Institute of Technology (KIT). He was a visiting scholar at Berkeley Lab from 2024 to 2025.

    The post Reversible degradation phenomenon in PEMWE cells appeared first on Physics World.

    https://physicsworld.com/a/reversible-degradation-phenomenon-in-pemwe-cells/
    No Author

    Ramy Shelbaya: the physicist and CEO capitalizing on quantum randomness

    Ramy Shelbaya from Quantum Dice talks about using quantum mechanics to generate random numbers

    The post Ramy Shelbaya: the physicist and CEO capitalizing on quantum randomness appeared first on Physics World.

    Ramy Shelbaya has been hooked on physics ever since he was a 12-year-old living in Egypt and read about the Joint European Torus (JET) fusion experiment in the UK. Biology and chemistry were interesting to him but never quite as “satisfying”, especially as they often seemed to boil down to physics in the end. “So I thought, maybe that’s where I need to go,” Shelbaya recalls.

    His instincts seem to have led him in the right direction. Shelbaya is now chief executive of Quantum Dice, an Oxford-based start-up he co-founded in 2020 to develop quantum hardware for exploiting the inherent randomness in quantum mechanics. It closed its first funding round in 2021 with a seven-figure investment from a consortium of European investors, while also securing grant funding on the same scale.

    Now providing cybersecurity hardware systems for clients such as BT, Quantum Dice is launching a piece of hardware for probabilistic computing, based on the same core innovation. Full of joy and zeal for his work, Shelbaya admits that his original decision to pursue physics was “scary”. Back then, he didn’t know anyone who had studied the subject and was not sure where it might lead.

    The journey to a start-up

    Fortunately, Shelbaya’s parents were onboard from the start and their encouragement proved “incredibly helpful”. His teachers also supported him to explore physics in his extracurricular reading, instilling a confidence in the subject that eventually led Shelbaya to do undergraduate and master’s degrees in physics at École normale supérieure PSL in France.

    He then moved to the UK to do a PhD in atomic and laser physics at the University of Oxford. Just as he was wrapping up his PhD, Oxford University Innovation (OUI) – which manages its technology transfer and consulting activities – launched a new initiative that proved pivotal to Shelbaya’s career.

    From PhD student to CEO: Ramy Shelbaya transformed a research idea into a commercial product after winning a competition for budding entrepreneurs. (Courtesy: Quantum Dice)

    OUI had noted that the university generated a lot of IP and research results that could be commercialized but that the academics producing it often favoured academic work over progressing the technology transfer themselves. On the other hand, lots of students were interested in entering the world of business.

    To encourage those who might be business-minded to found their own firms, while also fostering more spin-outs from the university’s patents and research, OUI launched the Student Entrepreneurs’ Programme (StEP). A kind of talent show to match budding entrepreneurs with technology ready for development, StEP invited participants to team up, choose commercially promising research from the university, and pitch for support and mentoring to set up a company.

    As part of Oxford’s atomic and laser physics department, Shelbaya was aware that it had been developing a quantum random number generator. So when the competition was launched, he collaborated with other competition participants to pitch the device. “My team won, and this is how Quantum Dice was born.”

    Random value

    The initial technology was geared towards quantum random number generation, for particular use in cybersecurity. Random numbers are at the heart of all encryption algorithms, but generating truly random numbers has been a stumbling block, with the “pseudorandom” numbers people make do with being prone to prediction and hence security violation.

    Quantum mechanics provides a potential solution because there is inherent randomness in the values of certain quantum properties. Although for a long time this randomness was “a bane to quantum physicists”, as Shelbaya puts it, Quantum Dice and other companies producing quantum random number generators are now harnessing it for useful technologies.

    Where Quantum Dice sees itself as having an edge over its competitors is in its real-time quality assurance of the true quantum randomness of the device’s output. This means it should be able to spot any corruption to the output due to environmental noise or someone tampering with the device, which is an issue with current technologies.
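
    As a generic illustration of what an online health check on a random bit stream can look like (a simple statistical test of our own devising, not Quantum Dice's proprietary self-verification scheme), the Python snippet below estimates the bit bias and a crude per-bit min-entropy for each block, and flags blocks that drift too far from a fair coin.

    ```python
    import numpy as np

    def health_check(bits, z_threshold=4.0):
        n = bits.size
        ones = int(bits.sum())
        p_max = max(ones, n - ones) / n
        min_entropy = -np.log2(p_max)              # per-bit min-entropy estimate
        z = abs(ones - n / 2) / np.sqrt(n / 4)     # deviation from an unbiased source
        return min_entropy, z, z < z_threshold

    rng = np.random.default_rng(1)                 # pseudorandom stand-in for the quantum source
    block = rng.integers(0, 2, size=100_000)
    h_min, z, ok = health_check(block)
    print(f"min-entropy = {h_min:.4f} bits/bit, z = {z:.2f}, pass = {ok}")
    ```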

    Quantum Dice already offers Quantum Random Number Generator (QRNG) products in a range of form factors that integrate directly within servers, PCs and hardware security systems. Clients can also access the company’s cloud-based solution – Quantum Entropy-as-a-Service – which, powered by its QRNG hardware, integrates into the Internet of Things and cloud infrastructure.

    Recently Quantum Dice has also launched a probabilistic computing processor based on its QRNG for use in algorithms centred on probabilities. These are often geared towards optimization problems that apply in a number of sectors, including supply chains and logistics, finance, telecommunications and energy, as well as simulating quantum systems, and Boltzmann machines – a type of energy-based machine learning model for which Shelbaya says researchers “have long sought efficient hardware”.

    Stress testing

    After winning the start-up competition in 2019, things got trickier when Quantum Dice was ready to be incorporated, which happened just as the first COVID-19 lockdown began. Shelbaya moved the prototype device into his living room because it was the only place where the team could guarantee access to it, but it turned out the real challenges lay elsewhere.

    “One of the first things we needed was investments, and really, at that stage of the company, what investors are investing in is you,” explains Shelbaya, highlighting how difficult this is when you cannot meet in person. On the plus side, since all his meetings were remote, he could speak to investors in Asia in the morning, Europe in the afternoon and the US in the evening, all within the same day.

    Another challenge was how to present the technology simply enough so that people would understand and trust it, while not making it seem so simple that anyone could be doing it. “There’s that sweet spot in the middle,” says Shelbaya. “That is something that took time, because it’s a muscle that I had never worked.”

    Due rewards

    The company performed well for its size and sector in terms of securing investments when their first round of funding closed in 2021. Shelbaya is shy of attributing the success to his or even the team’s abilities alone, suggesting this would “underplay a lot of other factors”. These include the rising interest in quantum technologies, and the advantages of securing government grant funding programmes at the same time, which he feels serves as “an additional layer of certification”.

    For Shelbaya every day is different and so are the challenges. Quantum Dice is a small new company, where many of the 17 staff are still fresh from university, so nurturing trust among clients, particularly in the high-stakes world of cybersecurity, is no small feat. Managing a group of ambitious, energetic and driven young people can be complicated too.

    But there are many rewards, ranging from seeing a piece of hardware finally work as intended and closing a deal with a client, to simply seeing a team “you have been working to develop, working together without you”.

    For others hoping to follow a similar career path, Shelbaya’s advice is to do what you enjoy – not just because you will have fun but because you will be good at it too. “Do what you enjoy,” he says, “because you will likely be great at it.”

    The post Ramy Shelbaya: the physicist and CEO capitalizing on quantum randomness appeared first on Physics World.

    https://physicsworld.com/a/ramy-shelbaya-the-physicist-and-ceo-capitalizing-on-quantum-randomness/
    Anna Demming

    ‘Patchy’ nanoparticles emerge from new atomic stencilling technique

    Multipurpose structures could find use in targeted drug delivery, catalysis, microelectronics and tissue engineering

    The post ‘Patchy’ nanoparticles emerge from new atomic stencilling technique appeared first on Physics World.

    Researchers in the US and Korea have created nanoparticles with carefully designed “patches” on their surfaces using a new atomic stencilling technique. These patches can be controlled with incredible precision, and could find use in targeted drug delivery, catalysis, microelectronics and tissue engineering.

    The first step in the stencilling process is to create a mask on the surface of gold nanoparticles. This mask prevents a “paint” made from grafted-on polymers from attaching to certain areas of the nanoparticles.

    “We then use iodide ions as a stencil,” explains Qian Chen, a materials scientist and engineer at the University of Illinois at Urbana-Champaign, US, who led the new research effort. “These adsorb (stick) to the surface of the nanoparticles in specific patterns that depend on the shape and atomic arrangement of the nanoparticles’ facets. That’s how we create the patches – the areas where the polymers selectively bind.” Chen adds that she and her collaborators can then tailor the surface chemistry of these tiny patchy nanoparticles in a very controlled way.

    A gap in the field of microfabrication stencilling

    The team decided to develop the technique after realizing there was a gap in the field of microfabrication stencilling. While techniques in this area have advanced considerably in recent years, allowing ever-smaller microdevices to be incorporated into ever-faster computer chips, most of them rely on top-down approaches for precisely controlling nanoparticles. By comparison, Chen says, bottom-up methods have been largely unexplored even though they are low-cost, solution-processable, scalable and compatible with complex, curved and three-dimensional surfaces.

    Reporting their work in Nature, the researchers say they were inspired by the way proteins naturally self-assemble. “One of the holy grails in the field of nanomaterials is making complex, functional structures from nanoscale building blocks,” explains Chen. “It’s extremely difficult to control the direction and organization of each nanoparticle. Proteins have different surface domains, and thanks to their interactions with each other, they can make all the intricate machines we see in biology. We therefore adopted that strategy by creating patches or distinct domains on the surface of the nanoparticles.”

    “Elegant and impressive”

    Philip Moriarty, a physicist at the University of Nottingham, UK, who was not involved in the project, describes it as “elegant and impressive” work. “Chen and colleagues have essentially introduced an entirely new mode of self-assembly that allows for much greater control of nanoparticle interactions,” he says, “and the ‘atomic stencil’ concept is clever and versatile.”

    The team, which includes researchers at the University of Michigan, Pennsylvania State University, Cornell, Brookhaven National Laboratory and Korea’s Chonnam National University as well as Urbana-Champaign, agrees that the potential applications are vast. “Since we can now precisely control the surface properties of these nanoparticles, we can design them to interact with their environment in specific ways,” explains Chen. “That opens the door for more effective drug delivery, where nanoparticles can target specific cells. It could also lead to new types of catalysts, more efficient microelectronic components and even advanced materials with unique optical and mechanical properties.”

    She and her colleagues say they now want to extend their approach to different types of nanoparticles and different substrates to find out how versatile it truly is. They will also be developing computational models that can predict the outcome of the stencilling process – something that would allow them to design and synthesize patchy nanoparticles for specific applications on demand.

    The post ‘Patchy’ nanoparticles emerge from new atomic stencilling technique appeared first on Physics World.

    https://physicsworld.com/a/patchy-nanoparticles-emerge-from-new-atomic-stencilling-technique/
    Isabelle Dumé

    Scientists in China celebrate the completion of the underground JUNO neutrino observatory

    The observatory has also released its first results on the so-called solar neutrino tension

    The post Scientists in China celebrate the completion of the underground JUNO neutrino observatory appeared first on Physics World.

    The $330m Jiangmen Underground Neutrino Observatory (JUNO) has released its first results following the completion of the huge underground facility in August.

    JUNO is located in Kaiping City, Guangdong Province, in the south of the country around 150 km west of Hong Kong.

    Construction of the facility began in 2015 and was set to be complete some five years later. Yet the project suffered from serious flooding, which delayed construction.

    JUNO, which is expected to run for more than 30 years, aims to study the relationship between the three types of neutrino: electron, muon and tau. Although JUNO will be able to detect neutrinos produced by supernovae as well as those from Earth, the observatory will mainly measure the energy spectrum of electron antineutrinos released by the Yangjiang and Taishan nuclear power plants, which both lie 52.5 km away.

    To do this, the facility has an 80 m-high, 50 m-diameter experimental hall located 700 m underground. Its main feature is a 35 m-diameter spherical neutrino detector containing 20,000 tonnes of liquid scintillator. When an electron antineutrino occasionally bumps into a proton in the liquid, it triggers a reaction that results in two flashes of light, which are detected by the 43,000 photomultiplier tubes that observe the scintillator.
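
    For context, the reaction JUNO exploits is standard inverse beta decay (textbook detector physics rather than a detail of the JUNO papers):

        \bar{\nu}_e + p \;\to\; e^{+} + n

    The positron produces a prompt flash of light as it slows and annihilates, while the neutron is captured shortly afterwards, producing a second, delayed flash; the coincidence of the two signals is what distinguishes antineutrino events from background.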

    On 18 November, a paper was submitted to the arXiv preprint server concluding that the detector’s key performance indicators fully meet or surpass design expectations.

    New measurement 

    Neutrinos oscillate from one flavour to another as they travel near the speed of light, rarely interacting with matter. This oscillation is a result of each flavour being a combination of three neutrino mass states.

    Scientists do not yet know the absolute masses of the three neutrino mass states, but they can measure neutrino oscillation parameters, known as θ12, θ23 and θ13, as well as the squared mass differences (Δm²) between pairs of mass states.
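
    As a rough guide to what these parameters govern, the textbook two-flavour approximation (ignoring the smaller θ13-driven terms, and not taken from the JUNO analysis itself) gives the survival probability of a reactor antineutrino of energy E after travelling a distance L as

        P(\bar{\nu}_e \to \bar{\nu}_e) \;\approx\; 1 - \sin^2(2\theta_{12})\,\sin^2\!\left(\frac{\Delta m^2_{21} L}{4E}\right)

    so the depth and the energy dependence of the observed deficit encode θ12 and Δm²21 respectively.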

    A second JUNO paper submitted on 18 November used data collected between 26 August and 2 November to measure the solar oscillation parameters θ12 and Δm²21 with a factor of 1.6 better precision than previous experiments.

    Those earlier results, which used solar neutrinos instead of reactor antineutrinos, showed a 1.5 “sigma” discrepancy with the Standard Model of particle physics. The new JUNO measurements confirmed this difference, dubbed the solar neutrino tension, but further data will be needed to prove or disprove the finding.

    “Achieving such precision within only two months of operation shows that JUNO is performing exactly as designed,” says Yifang Wang from the Institute of High Energy Physics of the Chinese Academy of Sciences, who is JUNO project manager and spokesperson. “With this level of accuracy, JUNO will soon determine the neutrino mass ordering, test the three-flavour oscillation framework, and search for new physics beyond it.”

    JUNO, which is an international collaboration of more than 700 scientists from 75 institutions across 17 countries including China, France, Germany, Italy, Russia, Thailand, and the US, is the second neutrino experiment in China, after the Daya Bay Reactor Neutrino Experiment. Daya Bay successfully measured a key neutrino oscillation parameter called θ13 in 2012 before being closed down in 2020.

    JUNO is also one of three next-generation neutrino experiments, the other two being the Hyper-Kamiokande in Japan and the Deep Underground Neutrino Experiment in the US. Both are expected to become operational later this decade.

    The post Scientists in China celebrate the completion of the underground JUNO neutrino observatory appeared first on Physics World.

    https://physicsworld.com/a/scientists-in-china-celebrate-the-completion-of-the-underground-juno-neutrino-observatory/
    Michael Banks

    Accelerator experiment sheds light on missing blazar radiation

    Measurement discounts loss from plasma instabilities

    The post Accelerator experiment sheds light on missing blazar radiation appeared first on Physics World.

    New experiments at CERN by an international team have ruled out a potential source of intergalactic magnetic fields. The existence of such fields is invoked to explain why we do not observe secondary gamma rays originating from blazars.

    Led by Charles Arrowsmith at the UK’s University of Oxford, the team suggests the absence of gamma rays could be the result of an unexplained phenomenon that took place in the early universe.

    A blazar is an extraordinarily bright object with a supermassive black hole at its core. Some of the matter falling into the black hole is accelerated outwards in a pair of opposing jets, creating intense beams of radiation. If a blazar jet points towards Earth, we observe a bright source of light including high-energy teraelectronvolt gamma rays.

    During their journey across intergalactic space, these gamma-ray photons will occasionally collide with the background starlight that permeates the universe. These collisions can create cascades of electrons and positrons that can then scatter off photons to create gamma rays in the gigaelectronvolt energy range. These gamma rays should travel in the direction of the original jet, but this secondary radiation has never been detected.
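
    Schematically, the cascade proceeds via two standard processes (textbook reactions, not results of this study): pair production on the extragalactic background starlight, followed by inverse-Compton scattering, typically off photons of the cosmic microwave background,

        \gamma_{\mathrm{TeV}} + \gamma_{\mathrm{bg}} \to e^{+} + e^{-}, \qquad e^{\pm} + \gamma_{\mathrm{CMB}} \to e^{\pm} + \gamma_{\mathrm{GeV}}

    which is how teraelectronvolt photons absorbed en route would be reprocessed into the gigaelectronvolt emission that has so far gone undetected.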

    Deflecting field

    Magnetic fields could be the reason for this dearth, as Arrowsmith explains: “The electrons and positrons in the pair cascade would be deflected by an intergalactic magnetic field, so if this is strong enough, we could expect these pairs to be steered away from the line of sight to the blazar, along with the reprocessed gigaelectronvolt gamma rays.” It is not clear, however, that such fields exist – and if they do, what could have created them.

    Another explanation for the missing gamma rays involves the extremely sparse plasma that permeates intergalactic space. The beam of electron–positron pairs could interact with this plasma, generating magnetic fields that separate the pairs. Over millions of years of travel, this process could lead to beam–plasma instabilities that reduce the beam’s ability to create gigaelectronvolt gamma rays that are focused on Earth.

    Oxford’s Gianluca Gregori explains, “We created an experimental platform at the HiRadMat facility at CERN to create electron–positron pairs and transport them through a metre-long ambient argon plasma, mimicking the interaction of pair cascades from blazars with the intergalactic medium”. Once the pairs had passed through the plasma, the team measured the degree to which they had been separated.

    Tightly focused

    Called Fireball, the experiment found that the beams remained far more tightly focused than expected. “When these laboratory results are scaled up to the astrophysical system, they confirm that beam–plasma instabilities are not strong enough to explain the absence of the gigaelectronvolt gamma rays from blazars,” Arrowsmith explains. Unless the pair beam is perfectly collimated or composed of pairs with exactly equal energies, the instabilities are actively suppressed in the plasma.

    While the experiment suggests that an intergalactic magnetic field remains the best explanation for the lack of gamma rays, the mystery is far from solved. Gregori explains, “The early universe is believed to be extremely uniform – but magnetic fields require electric currents, which in turn need gradients and inhomogeneities in the primordial plasma.” As a result, confirming the existence of such a field could point to new physics beyond the Standard Model, which may have dominated in the early universe.

    More information could come with the opening of the Cherenkov Telescope Array Observatory. This will comprise ground-based gamma-ray detectors planned for facilities in Spain and Chile, which will vastly improve on the resolution of current-generation detectors.

    The research is described in PNAS.

    The post Accelerator experiment sheds light on missing blazar radiation appeared first on Physics World.

    https://physicsworld.com/a/accelerator-experiment-sheds-light-on-missing-blazar-radiation/
    No Author

    Why quantum metrology is the driving force for best practice in quantum standardization

    International efforts on standards development will fast-track the adoption and commercialization of quantum technologies

    The post Why quantum metrology is the driving force for best practice in quantum standardization appeared first on Physics World.

    Quantum advantage international standardization efforts will, over time, drive economies of scale and multivendor interoperability across the nascent quantum supply chain. (Courtesy: iStock/Peter Hansen)

    How do standards support the translation of quantum science into at-scale commercial opportunities?

    The standardization process helps to promote the legitimacy of emerging quantum technologies by distilling technical inputs and requirements from all relevant stakeholders across industry, research and government. Put simply: if you understand a technology well enough to standardize elements of it, that’s when you know it’s moved beyond hype and theory into something of practical use for the economy and society.

    What are the upsides of standardization for developers of quantum technologies and, ultimately, for end-users in industry and the public sector?

    Standards will, over time, help the quantum technology industry achieve critical mass on the supply side, with those economies of scale driving down prices and increasing demand. As the nascent quantum supply chain evolves – linking component manufacturers, subsystem developers and full-stack quantum computing companies – standards will also ensure interoperability between products from different vendors and different regions.

    Those benefits flow downstream as well because standards, when implemented properly, increase trust among end-users by defining a minimum quality of products, processes and services. Equally important, as new innovations are rolled out into the marketplace by manufacturers, standards will ensure compatibility across current and next-generation quantum systems, reducing the likelihood of lock-ins to legacy technologies.

    What’s your role in coordinating NPL’s standards effort in quantum science and technology?

    I have strategic oversight of our core technical programmes in quantum computing, quantum networking, quantum metrology and quantum-enabled PNT (position, navigation and timing). It’s a broad-scope remit that spans research and training, as well as responsibility for standardization and international collaboration, with the latter two often going hand-in-hand.

    Right now, we have over 150 people working within the NPL quantum metrology programme. Their collective focus is on developing the measurement science necessary to build, test and evaluate a wide range of quantum devices and systems. Our research helps innovators, whether in an industry or university setting, to push the limits of quantum technology by providing leading-edge capabilities and benchmarking to measure the performance of new quantum products and services.

    Tim Prior “We believe that quantum metrology and standardization are key enablers of quantum innovation.” (Courtesy: NPL)

    It sounds like there are multiple layers of activity.

    That’s right. For starters, we have a team focusing on the inter-country strategic relationships, collaborating closely with colleagues at other National Metrology Institutes (like NIST in the US and PTB in Germany). A key role in this regard is our standards specialist who, given his background working in the standards development organizations (SDOs), acts as a “connector” between NPL’s quantum metrology teams and, more widely, the UK’s National Quantum Technology Programme and the international SDOs.

    We also have a team of technical experts who sit on specialist working groups within the SDOs. Their inputs to standards development are not about promoting NPL’s interests; rather, they provide expertise and experience gained from cutting-edge metrology, and build a consolidated set of requirements gathered from stakeholders across the quantum community to further the UK’s strategic and technical priorities in quantum.

    So NPL’s quantum metrology programme provides a focal point for quantum standardization?

    Absolutely. We believe that quantum metrology and standardization are key enablers of quantum innovation, fast-tracking the adoption and commercialization of quantum technologies while building confidence among investors and across the quantum supply chain and early-stage user base. For NPL and its peers, the task right now is to agree on the terminology and best practice as we figure out the performance metrics, benchmarks and standards that will enable quantum to go mainstream.

    How does NPL engage the UK quantum community on standards development?

    Front-and-centre is the UK Quantum Standards Network Pilot. This initiative – which is being led by NPL – brings together representatives from industry, academia and government to work on all aspects of standards development: commenting on proposals and draft standards; discussing UK standards policy and strategy; and representing the UK in the European and international SDOs. The end-game? To establish the UK as a leading voice in quantum standardization, both strategically and technically, and to ensure that UK quantum technology companies have access to global supply chains and markets.

    What about NPL outreach to prospective end-users of quantum technologies?

    The Quantum Standards Network Pilot also provides a direct line to prospective end-users of quantum technologies in business sectors like finance, healthcare, pharmaceuticals and energy. What’s notable is that the end-users are often preoccupied with questions that link in one way or another to standardization. For example: how well do quantum technologies stack up against current solutions? Are quantum systems reliable enough yet? What does quantum cost to implement and maintain, including long-term operational costs? Are there other emerging technologies that could do the same job? Is there a solid, trustworthy supply chain?

    It’s clear that international collaboration is mandatory for successful standards development. What are the drivers behind the recently announced NMI-Q collaboration?

    The quantum landscape is changing fast, with huge scope for disruptive innovation in quantum computing, quantum communications and quantum sensing. Faced with this level of complexity, NMI-Q leverages the combined expertise of the world’s leading National Metrology Institutes – from the G7 countries and Australia – to accelerate the development and adoption of quantum technologies.

    No one country can do it all when it comes to performance metrics, benchmarks and standards in quantum science and technology. As such, NMI-Q’s priorities are to conduct collaborative pre-standardization research; develop a set of “best measurement practices” needed by industry to fast-track quantum innovation; and, ultimately, shape the global standardization effort in quantum. NPL’s prominent role within NMI-Q (I am the co-chair along with Barbara Goldstein of NIST) underscores our commitment to evidence-based decision-making in standards development and, ultimately, to the creation of a thriving quantum ecosystem.

    What are the attractions of NPL’s quantum programme for early-career physicists?

    Every day, our measurement scientists address cutting-edge problems in quantum – as challenging as anything they’ll have encountered previously in an academic setting. What’s especially motivating, however, is that the NPL is a mission-driven endeavour with measurement outcomes linking directly to wider societal and economic benefits – not just in the UK, but internationally as well.

    Quantum metrology: at your service

    Measurement for Quantum (M4Q) is a flagship NPL programme that provides industry partners with up to 20 days of quantum metrology expertise to address measurement challenges in applied R&D and product development. The service – which is free of charge for projects approved after peer review – helps companies to bridge the gap from technology prototype to full commercialization.

    To date, more than two-thirds of the companies that have participated in M4Q report that their commercial opportunity has increased as a direct result of NPL support. In terms of specifics, the M4Q offering includes the following services:

    • Small-current and quantum-noise measurements
    • Measurement of material-induced noise in superconducting quantum circuits
    • Nanoscale imaging of physical properties for applications in quantum devices
    • Characterization of single-photon sources and detectors
    • Characterization of compact lasers and other photonic components
    • Semiconductor device characterization at cryogenic temperatures

    Apply for M4Q support here.

    Further reading

    Performance metrics and benchmarks point the way to practical quantum advantage

    End note: NPL retains copyright on this article.

    The post Why quantum metrology is the driving force for best practice in quantum standardization appeared first on Physics World.

    https://physicsworld.com/a/why-quantum-metrology-is-the-driving-force-for-best-practice-in-quantum-standardization/
    No Author

    Ask me anything: Jason Palmer – ‘Putting yourself in someone else’s shoes is a skill I employ every day’

    Jason Palmer talks about how a career in journalism offers a variety of opportunities, but you have to be okay with not being the expert in the room

    The post Ask me anything: Jason Palmer – ‘Putting yourself in someone else’s shoes is a skill I employ every day’ appeared first on Physics World.

    What skills do you use every day in your job?

    One thing I can say for sure that I got from working in academia is the ability to quickly read, summarize and internalize information from a bunch of sources. Journalism requires a lot of that. Being able to skim through papers – reading the abstract, reading the conclusion, picking the right bits from the middle and so on – that is a life skill.

    In terms of other skills, I’m always considering who’s consuming what I’m doing rather than just thinking about how I’d like to say something. You have to think about how it’s going to be received – what’s the person on the street going to hear? Is this clear enough? If I were hearing this for the first time, would I understand it? Putting yourself in someone else’s shoes – be it the listener, reader or viewer – is a skill I employ every day.

    What do you like best and least about your job?

    The best thing is the variety. I ended up in this business and not in scientific research because of a desire for a greater breadth of experience. And boy, does this job have it. I get to talk to people around the world about what they’re up to, what they see, what it’s like, and how to understand it. And I think that makes me a much more informed person than I would be had I chosen to remain a scientist.

    When I did research – and even when I was a science journalist – I thought “I don’t need to think about what’s going on in that part of the world so much because that’s not my area of expertise.” Now I have to, because I’m in this chair every day. I need to know about lots of stuff, and I like that feeling of being more informed.

    I suppose what I like the least about my job is the relentlessness of it. It is a newsy time. It’s the flip side of being well informed: you’re forced to confront lots of bad things – the horrors that are going on in the world, the fact that in a lot of places the bad guys are winning.

    What do you know today that you wish you knew when you were starting out in your career?

    When I started in science journalism, I wasn’t a journalist – I was a scientist pretending to be one. So I was always trying to show off what I already knew as a sort of badge of legitimacy. I would call some professor on a topic that I wasn’t an expert in yet just to have a chat to get up to speed, and I would spend a bunch of time showing off, rabbiting on about what papers I’d read and what I knew, just to feel like I belonged in the room or on that call. And it’s a waste of time. You have to swallow your ego and embrace the idea that you may sound like you don’t know stuff even if you do. You might sound dumber, but that’s okay – you’ll learn more and faster, and you’ll probably annoy people less.

    In journalism in particular, you don’t want to preload the question with all of the things that you already know because then the person you’re speaking to can fill in those blanks – and they’re probably going to talk about things you didn’t know you didn’t know, and take your conversation in a different direction.

    It’s one of the interesting things about science in general. If you go into a situation with experts, and are open and comfortable about not knowing it all, you’re showing that you understand that nobody can know everything and that science is a learning process.

    The post Ask me anything: Jason Palmer – ‘Putting yourself in someone else’s shoes is a skill I employ every day’ appeared first on Physics World.

    https://physicsworld.com/a/ask-me-anything-jason-palmer-putting-yourself-in-someone-elses-shoes-is-a-skill-i-employ-every-day/
    Hamish Johnston

    Sympathetic cooling gives antihydrogen experiment a boost

    Having more antimatter could help solve profound mysteries of physics

    The post Sympathetic cooling gives antihydrogen experiment a boost appeared first on Physics World.

    Physicists working on the Antihydrogen Laser Physics Apparatus (ALPHA) experiment at CERN have trapped and accumulated 15,000 antihydrogen atoms in less than 7 h. This accumulation rate is more than 20 times the previous record. Large ensembles of antihydrogen could be used to search for tiny, unexpected differences between matter and antimatter – which if discovered could point to physics beyond the Standard Model.

    According to the Standard Model every particle has an antimatter counterpart – or antiparticle. It also says that roughly equal amounts of matter and antimatter were created in the Big Bang. But, today there is much more matter than antimatter in the visible universe, and the reason for this “baryon asymmetry” is one of the most important mysteries of physics.

    The Standard Model predicts the properties of antiparticles. An antiproton, for example, has the same mass as a proton and the opposite charge. The Standard Model also predicts how antiparticles interact with matter and antimatter. If physicists could find discrepancies between the measured and predicted properties of antimatter, it could help explain the baryon asymmetry and point to other new physics beyond the Standard Model.

    Powerful probe

    Just as a hydrogen atom comprises a proton bound to an electron, an antihydrogen antiatom comprises an antiproton bound to an antielectron (positron). Antihydrogen offers physicists several powerful ways to probe antimatter at a fundamental level. Trapped antiatoms can be released in freefall to determine if they respond to gravity in the same way as atoms. Spectroscopy can be used to make precise measurements of how the electromagnetic force binds the antiproton and positron in antihydrogen with the aim of finding differences compared to hydrogen.

    So far, antihydrogen’s gravitational and electromagnetic properties appear to be identical to hydrogen. However, these experiments were done using small numbers of antiatoms, and having access to much larger ensembles would improve the precision of such measurements and could reveal tiny discrepancies. However, creating and storing antihydrogen is very difficult.

    Today, antihydrogen can only be made in significant quantities at CERN in Switzerland. There, a beam of protons is fired at a solid target, creating antiprotons that are then cooled and stored using electromagnetic fields. Meanwhile, positrons are gathered from the decay of radioactive nuclei and cooled and stored using electromagnetic fields. These antiprotons and positrons are then combined in a special electromagnetic trap to create antihydrogen.

    This process works best when the antiprotons and positrons have very low kinetic energies (temperatures) when combined. If the energy is too high, many antiatoms will escape the trap. So, it is crucial for the positrons and antiprotons to be as cold as possible.

    Sympathetic cooling

    Recently, ALPHA physicists have used a technique called sympathetic cooling on positrons, and in a new paper they describe their success.  Sympathetic cooling has been used for several decades to cool atoms and ions. It originally involved mixing a hard-to-cool atomic species with atoms that are relatively easy to cool using lasers. Energy is transferred between the two species via the electromagnetic interaction, which chills the hard-to-cool atoms.
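
    A minimal way to picture this (a toy rate-equation sketch assuming simple collisional coupling, not the ALPHA team’s model) is that the temperature T_h of the hard-to-cool species relaxes towards that of the laser-cooled coolant, T_c, at a rate Γ set by the strength of their mutual Coulomb interaction:

        \frac{\mathrm{d}T_h}{\mathrm{d}t} \approx -\Gamma\,(T_h - T_c)

    As long as the coolant is continuously laser-cooled, the mixture settles near T_c rather than at some intermediate temperature.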

    The ALPHA team used beryllium ions to sympathetically cool positrons to 10 K, which is five degrees colder than previously achieved using other techniques. These cold positrons boosted the efficiency of the creation and trapping of antihydrogen, allowing the team to accumulate 15,000 antihydrogen atoms in less than 7 h. This is more than a 20-fold improvement over their previous record of accumulating 2000 antiatoms in 24 h.

    Science fiction

    “These numbers would have been considered science fiction 10 years ago,” says ALPHA spokesperson Jeffrey Hangst, who is at Denmark’s Aarhus University.

    Team member Maria Gonçalves, a PhD student at the UK’s Swansea University, says, “This result was the culmination of many years of hard work. The first successful attempt instantly improved the previous method by a factor of two, giving us 36 antihydrogen atoms”.

    The effort was led by Niels Madsen of the UK’s Swansea University. He enthuses, “It’s more than a decade since I first realized that this was the way forward, so it’s incredibly gratifying to see the spectacular outcome that will lead to many new exciting measurements on antihydrogen”.

    The cooling technique is described in Nature Communications.

    The post Sympathetic cooling gives antihydrogen experiment a boost appeared first on Physics World.

    https://physicsworld.com/a/sympathetic-cooling-gives-antihydrogen-experiment-a-boost/
    Hamish Johnston

    Plasma bursts from young stars could shed light on the early life of the Sun

     New multi-temperature coronal mass ejection observations might help us better understand how life emerged and evolved on Earth

    The post Plasma bursts from young stars could shed light on the early life of the Sun appeared first on Physics World.

    The Sun frequently ejects high-energy bursts of plasma that then travel through interplanetary space. These so-called coronal mass ejections (CMEs) are accompanied by strong magnetic fields, which, when they interact with the Earth’s atmosphere, can trigger solar storms that can severely damage satellite systems and power grids.

    In the early days of the solar system, the Sun was far more active than it is today and ejected much bigger CMEs. These might have been energetic enough to affect our planet’s atmosphere and therefore influence how life emerged and evolved on Earth, according to some researchers.

    Since it is impossible to study the early Sun, astronomers use proxies – that is, stars that resemble it. These “exo-suns” are young G-, K- and M-type stars and are far more active than our Sun is today. They frequently produce CMEs with energies far larger than the most energetic solar flares recorded in recent times, which might not only affect their planets’ atmospheres but could also alter the chemistry on those planets.

    Until now, direct observational evidence for eruptive CME-like phenomena on young solar analogues has been limited. This is because clear signatures of stellar eruptions are often masked by the brightness of the host stars and by flares on them. Measurements of Doppler shifts in optical lines have allowed astronomers to detect a few possible stellar eruptions associated with giant superflares on a young solar analogue, but these detections have been limited to single-wavelength data at “low temperatures” of around 10⁴ K. Studies at higher temperatures have been few and far between. And although scientists have tried out promising techniques, such as X-ray and UV dimming, to advance their understanding of these “cool” stars, few simultaneous multi-wavelength observations have been made.

    A large Carrington-class flare from EK Draconis

    On 29 March 2024, astronomers at Kyoto University in Japan detected a large Carrington-class flare – or superflare – in the far-ultraviolet from EK Draconis, a G-type star located approximately 112 light-years away from the Sun. Thanks to simultaneous observations in the ultraviolet and optical ranges of the electromagnetic spectrum, they say they have now been able to obtain the first direct evidence for a multi-temperature CME from this young solar analogue (which is around 50 to 125 million years old and has a radius similar to that of the Sun).

    The researchers’ campaign spanned four consecutive nights from 29 March to 1 April 2024. They made their ultraviolet observations with the Hubble Space Telescope and the Transiting Exoplanet Survey Satellite (TESS) and performed optical monitoring using three ground-based telescopes in Japan, Korea and the US.

    They found that the far-ultraviolet and optical lines were Doppler shifted during and just before the superflare, with the ultraviolet observations showing blueshifted emission indicative of hot plasma. About 10 minutes later, the optical telescopes observed blueshifted absorption in the hydrogen Hα line, which indicates cooler gases. According to the team’s calculations, the hot plasma had a temperature of 100 000 K and was ejected at speeds of 300–550 km/s, while the “cooler” gas (with a temperature of 10 000 K) was ejected at 70 km/s.
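
    The quoted speeds follow from the usual non-relativistic Doppler relation applied to the blueshifted line profiles (a standard conversion rather than a detail of the paper):

        v \approx c\,\frac{\Delta\lambda}{\lambda_0}

    where Δλ is the measured shift of a spectral line from its rest wavelength λ0; a blueshift corresponds to material moving towards the observer.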

    “These findings imply that it is the hot plasma rather than the cool plasma that carries kinetic energy into planetary space,” explains study leader Kosuke Namekata. “The existence of this plasma suggests that such CMEs from our Sun in the past, if frequent and strong, could have driven shocks and energetic particles capable of eroding or chemically altering the atmosphere of the early Earth and the other planets in our solar system.”

    “The discovery,” he tells Physics World, “provides the first observational link between solar and stellar eruptions, bridging stellar astrophysics, solar physics and planetary science.”

    Looking forward, the researchers, who report their work in Nature Astronomy, now plan to conduct similar, multiwavelength campaigns on other young solar analogues to determine how frequently such eruptions occur and how they vary from star to star.

    “In the near future, next-generation ultraviolet space telescopes such as JAXA’s LAPYUTA and NASA’s ESCAPADE, coordinated with ground-based facilities, will allow us to trace these events more systematically and understand their cumulative impact on planetary atmospheres,” says Namekata.

    The post Plasma bursts from young stars could shed light on the early life of the Sun appeared first on Physics World.

    https://physicsworld.com/a/plasma-bursts-from-young-stars-could-shed-light-on-the-early-life-of-the-sun/
    Isabelle Dumé

    Flattened halo of dark matter could explain high-energy ‘glow’ at Milky Way’s heart

    Finding brings us a step closer to solving the mystery of dark matter, say astronomers

    The post Flattened halo of dark matter could explain high-energy ‘glow’ at Milky Way’s heart appeared first on Physics World.

    Astronomers have long puzzled over the cause of a mysterious “glow” of very high energy gamma radiation emanating from the centre of our galaxy. One possibility is that dark matter – the unknown substance thought to make up more than 25% of the universe’s mass – might be involved. Now, a team led by researchers at Germany’s Leibniz Institute for Astrophysics Potsdam (AIP) says that a flattened rather than spherical distribution of dark matter could account for the glow’s properties, bringing us a step closer to solving the mystery.

    Dark matter is believed to be responsible for holding galaxies together. However, since it does not interact with light or other electromagnetic radiation, it can only be detected through its gravitational effects. Hence, while astrophysical and cosmological evidence has confirmed its presence, its true nature remains one of the greatest mysteries in modern physics.

    “It’s extremely consequential and we’re desperately thinking all the time of ideas as to how we could detect it,” says Joseph Silk, an astronomer at Johns Hopkins University in the US and the Institut d’Astrophysique de Paris and Sorbonne University in France who co-led this research together with the AIP’s Moorits Mihkel Muru. “Gamma rays, and specifically the excess light we’re observing at the centre of our galaxy, could be our first clue.”

    Models might be too simple

    The problem, Muru explains, is that the way scientists have usually modelled dark matter to account for the excess gamma-ray radiation in astronomical observations was highly simplified. “This, of course, made the calculations easier, but simplifications always fuzzy the details,” he says. “We showed that in this case, the details are important: we can’t model dark matter as a perfectly symmetrical cloud and instead have to take into account the asymmetry of the cloud.”

    Muru adds that the team’s findings, which are detailed in Phys. Rev. Lett., provide a boost to the “dark matter annihilation” explanation of the excess radiation. According to the standard model of cosmology, all galaxies – including our own Milky Way – are nested inside huge haloes of dark matter. The density of this dark matter is highest at the centre, and while it primarily interacts through gravity, some models suggest that it could be made of massive, neutral elementary particles that are their own antimatter counterparts. In these dense regions, therefore, such dark matter species could be mutually annihilating, producing substantial amounts of radiation.
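
    The halo’s shape matters because the predicted annihilation signal depends on the square of the dark matter density integrated along each line of sight; schematically (the standard expression for an annihilation flux, not a result of this paper),

        \Phi_\gamma \;\propto\; \langle\sigma v\rangle \int_{\mathrm{l.o.s.}} \rho_{\mathrm{DM}}^{2}(\mathbf{r})\,\mathrm{d}l

    so flattening the distribution changes where on the sky the ρ²-weighted emission, and hence the modelled gamma-ray excess, is concentrated.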

    Pierre Salati, an emeritus professor at the Université Savoie Mont Blanc, France, who was not involved in this work, says that in these models, annihilation plays a crucial role in generating a dark matter component with an abundance that agrees with cosmological observations. “Big Bang nucleosynthesis sets stringent bounds on these models as a result of the overall concordance between the predicted elemental abundances and measurements, although most models do survive,” Salati says. “One of the most exciting aspects of such explanations is that dark matter species might be detected through the rare antimatter particles – antiprotons, positrons and anti-deuterons – that they produce as they currently annihilate inside galactic halos.”

    Silvia Manconi of the Laboratoire de Physique Théorique et Hautes Energies (LPTHE), France, who was also not involved in the study, describes it as “interesting and stimulating”. However, she cautions that – as is often the case in science – reality is probably more complex than even advanced simulations can capture. “This is not the first time that galaxy simulations have been used to study the implications of the excess and found non-spherical shapes,” she says, though she adds that the simulations in the new work offer “significant improvements” in terms of their spatial resolution.

    Manconi also notes that the study does not demonstrate how the proposed distribution of dark matter would appear in data from the Fermi Gamma-ray Space Telescope’s Large Area Telescope (LAT), or how it would differ quantitatively from observations of a distribution of old stars. Forthcoming observations with radio telescopes such as MeerKat and FAST, she adds, may soon identify pulsars in this region of the galaxy, shedding further light on other possible contributions to the excess of gamma rays.

    New telescopes could help settle the question

    Muru acknowledges that better modelling and observations are still needed to rule out other possible hypotheses. “Studying dark matter is very difficult, because it doesn’t emit or block light, and despite decades of searching, no experiment has yet detected dark matter particles directly,” he tells Physics World. “A confirmation that this observed excess radiation is caused by dark matter annihilation through gamma rays would be a big leap forward.”

    New gamma-ray telescopes with higher resolution, such as the Cherenkov Telescope Array, could help settle this question, he says. If these telescopes, which are currently under construction, fail to find star-like sources for the glow and only detect diffuse radiation, that would strengthen the alternative dark matter annihilation explanation.

    Muru adds that a “smoking gun” for dark matter would be a signal that matches current theoretical predictions precisely. In the meantime, he and his colleagues plan to work on predicting where dark matter should be found in several of the dwarf galaxies that circle the Milky Way.

    “It’s possible we will see the new data and confirm one theory over the other,” Silk says. “Or maybe we’ll find nothing, in which case it’ll be an even greater mystery to resolve.”

    The post Flattened halo of dark matter could explain high-energy ‘glow’ at Milky Way’s heart appeared first on Physics World.

    https://physicsworld.com/a/flattened-halo-of-dark-matter-could-explain-high-energy-glow-at-milky-ways-heart/
    Isabelle Dumé

    Talking physics with an alien civilization: what could we learn?

    Do Aliens Speak Physics? author Daniel Whiteson is our podcast guest

    The post Talking physics with an alien civilization: what could we learn? appeared first on Physics World.

    It is book week here at Physics World and over the course of three days we are presenting conversations with the authors of three fascinating and fun books about physics. Today, my guest is the physicist Daniel Whiteson, who along with the artist Andy Warner has created the delightful book Do Aliens Speak Physics?.

    Is physics universal, or is it shaped by human perspective? This will be a very important question if and when we are visited by an advanced alien civilization. Would we recognize our visitors’ alien science – or indeed, could a technologically-advanced civilization have no science at all? And would we even be able to communicate about science with our alien guests?

    Whiteson, who is a particle physicist at the University of California Irvine, tackles these profound questions and much more in this episode of the Physics World Weekly podcast.

    This episode is supported by the APS Global Physics Summit, which takes place on 15–20 March, 2026, in Denver, Colorado, and online.

    The post Talking physics with an alien civilization: what could we learn? appeared first on Physics World.

    https://physicsworld.com/a/talking-physics-with-an-alien-civilization-what-could-we-learn/
    Hamish Johnston

    International Quantum Year competition for science journalists begins

    Physics World and Physics Magazine launch a quantum competition for delegates at the 2025 World Conference of Science Journalists

    The post International Quantum Year competition for science journalists begins appeared first on Physics World.

    Are you a science writer attending the 2025 World Conference of Science Journalists (WCSJ) in Pretoria, South Africa? To mark the International Year of Quantum Science and Technology, Physics World (published by the Institute of Physics) and Physics Magazine (published by the American Physical Society) are teaming up to host a special Quantum Pitch Competition for WCSJ attendees.

    The two publications invite journalists to submit story ideas on any aspect of quantum science and technology. At least two selected pitches will receive paid assignments and be published in one of the magazines.

    Interviews with physicists and career profiles – either in academia or industry – are especially encouraged, but the editors will also consider news stories, podcasts, visual media and other creative storytelling formats that illuminate the quantum world for diverse audiences.

    Participants should submit a brief pitch (150–300 words recommended), along with a short journalist bio and a few representative clips, if available. Editors from Physics World and Physics Magazine will review all submissions and announce the winning pitches after the conference. Pitches should be submitted to physics@aps.org by 8 December 2025, with the subject line “2025WCSJ Quantum Pitch”.

    Whether you’re drawn to quantum materials, computing, sensing or the people shaping the field, this is an opportunity to feature fresh voices and ideas in two leading physics publications.

    This article forms part of Physics World’s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

    Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

    Find out more on our quantum channel.

    The post International Quantum Year competition for science journalists begins appeared first on Physics World.

    https://physicsworld.com/a/international-quantum-year-competition-for-science-journalists-begins/
    Matin Durrani

    New cylindrical metamaterials could act as shock absorbers for sensitive equipment

    Topological kagome tubes isolate vibrations to one end, keeping the other end safe

    The post New cylindrical metamaterials could act as shock absorbers for sensitive equipment appeared first on Physics World.

    A 3D-printed structure called a kagome tube could form the backbone of a new system for muffling damaging vibrations. The structure is part of a class of materials known as topological mechanical metamaterials, and unlike previous materials in this group, it is simple enough to be deployed in real-world situations. According to lead developer James McInerney of the Wright-Patterson Air Force Base in Ohio, US, it could be used as shock protection for sensitive systems found in civil and aerospace engineering applications.

    McInerney and colleagues’ tube-like design is made from a lattice of beams arranged in such a way that low-energy vibrational modes called floppy modes become localized to one side. “This provides good properties for isolating vibrations because energy input into the system on the floppy side does not propagate to the other side,” McInerney says.

    The key to this desirable behaviour, he explains, is the arrangement of the beams that form the lattice structure. Following a pattern first proposed by the 19th-century physicist James Clerk Maxwell, the beams are organized into repeating sub-units to form stable, two-dimensional structures known as topological Maxwell lattices.
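
    The “Maxwell” label refers to a counting rule (the standard Maxwell–Calladine count, stated here only for context): a frame of N sites joined by N_b central-force beams in d dimensions has a number of zero-energy “floppy” modes N_0 and states of self-stress N_s satisfying

        N_0 - N_s = d\,N - N_b

    A Maxwell lattice sits at the balance point dN = N_b (average coordination z = 2d), which is what allows its floppy modes to be shifted wholesale to one edge by the lattice’s topological polarization.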

    Self-supporting design

    Previous versions of these lattices could not support their own weight. Instead, they were attached to rigid external mounts, making it impractical to integrate them into devices. The new design, in contrast, is made by folding a flat Maxwell lattice into a cylindrical tube that is self-supporting. The tube features a connected inner and outer layer – a kagome bilayer – and its radius can be precisely engineered to give it the topological behaviour desired.

    The researchers, who detail their work in Physical Review Applied, first tested their structure numerically by attaching a virtual version to a mechanically sensitive sample and a source of low-energy vibrations. As expected, the tube diverted the vibrations away from the sample and towards the other end of the tube.

    Next, they developed a simple spring-and-mass model to understand the tube’s geometry by considering it as a simple monolayer. This modelling indicated that the polarization of the tube should be similar to the polarization of the monolayer. They then added rigid connectors to the tube’s ends and used a finite-element method to calculate the frequency-dependent patterns of vibrations propagating across the structure. They also determined the effective stiffness of the lattice as they applied loads parallel and perpendicular to it.

    The researchers are targeting vibration-isolation applications that would benefit from a passive support structure, especially in cases where the performance of alternative passive mechanisms, such as viscoelastomers, is temperature-limited. “Our tubes do not necessarily need to replace other vibration isolation mechanisms,” McInerney explains. “Rather, they can enhance the capabilities of these by having the load-bearing structure assist with isolation.”

    The team’s first and most important task, McInerney adds, will be to explore the implications of physically mounting the kagome tube on its vibration isolation structures. “The numerical study in our paper uses idealized mounting conditions so that the input and output are perfectly in phase with the tube vibrations,” he says. “Accounting for the potential impedance mismatch between the mounts and the tube will enable us to experimentally validate our work and provide realistic design scenarios.”

    The post New cylindrical metamaterials could act as shock absorbers for sensitive equipment appeared first on Physics World.

    https://physicsworld.com/a/new-cylindrical-metamaterials-could-act-as-shock-absorbers-for-sensitive-equipment/
    Isabelle Dumé

    Breakfast physics, delving into quantum 2.0, the science of sound, an update to everything: micro reviews of recent books

    Condensed natter: Physics World editors give their compressed verdicts on top new books

    The post Breakfast physics, delving into quantum 2.0, the science of sound, an update to everything: micro reviews of recent books appeared first on Physics World.

    Physics Around the Clock: Adventures in the Science of Everyday Living
    By Michael Banks

    Why do Cheerios tend to stick together while floating in a bowl of milk? Why does a runner’s ponytail swing side to side? These might not be the most pressing questions in physics, but getting to the answers is both fun and provides insights into important scientific concepts. These are just two examples of everyday physics that Physics World news editor Michael Banks explores in his book Physics Around the Clock, which begins with the physics (and chemistry) of your morning coffee and ends with a formula for predicting the winner of those cookery competitions that are mainstays of evening television. Hamish Johnston

    • 2025 The History Press
    • You can hear from Michael Banks talking about his book on the Physics World Weekly podcast

     

    Quantum 2.0: the Past, Present and Future of Quantum Physics
    By Paul Davies

    You might wonder why the world needs yet another book about quantum mechanics, but for physicists there’s no better guide than Paul Davies. Based for the last two decades at Arizona State University in the US, in Quantum 2.0 Davies tackles the basics of quantum physics – along with its mysteries, applications and philosophical implications – with great clarity and insight. The book ends with truly strange topics such as quantum Cheshire cats and delayed-choice quantum erasers – see if you prefer his descriptions to those we’ve attempted in Physics World this year. Matin Durrani

    • 2025 Pelican
    • You can hear from Paul Davies in the November episode of the Physics World Stories podcast

     

    Can You Get Music on the Moon? the Amazing Science of Sound and Space
    By Sheila Kanani, illustrated by Liz Kay

    Why do dogs bark but wolves howl? How do stars “sing”? Why does thunder rumble? This delightful, fact-filled children’s book answers these questions and many more, taking readers on an adventure through sound and space. Written by planetary scientist Sheila Kanani and illustrated by Liz Kay, Can You Get Music on the Moon? reveals not only how sound is produced but why it can make us feel certain things. Each of the 100 or so pages brims with charming illustrations that illuminate the many ways that sound is all around us. Michael Banks

    • 2025 Puffin Books

     

    A Short History of Nearly Everything 2.0
    By Bill Bryson

    Alongside books such as Stephen Hawking’s A Brief History of Time and Carl Sagan’s Cosmos, British-American author Bill Bryson’s A Short History of Nearly Everything is one of the bestselling popular-science books of the last 50 years. First published in 2003, the book became a fan favourite of readers across the world and across disciplines as Bryson wove together a clear and humorous narrative of our universe. Now, 22 years later, he has released an updated and revised volume – A Short History of Nearly Everything 2.0 – that covers major updates in science from the past two decades. This includes the discovery of the Higgs boson and the latest on dark-matter research. The new edition is still imbued with all the wit and wisdom of the original, making it the perfect Christmas present for scientists and anyone else curious about the world around us. Tushna Commissariat

    • 2025 Doubleday

    The post Breakfast physics, delving into quantum 2.0, the science of sound, an update to everything: micro reviews of recent books appeared first on Physics World.

    https://physicsworld.com/a/breakfast-physics-delving-into-quantum-2-0-the-science-of-sound-an-update-to-everything-micro-reviews-of-recent-books/
    No Author

    Quantum 2.0: Paul Davies on the next revolution in physics

    Peering into a near future where quantum science blends with AI. What are the implications for science, society and the arts?

    The post Quantum 2.0: Paul Davies on the next revolution in physics appeared first on Physics World.

    In this episode of Physics World Stories, theoretical physicist, cosmologist and author Paul Davies discusses his latest book, Quantum 2.0: the Past, Present and Future of Quantum Physics. A Regents Professor at Arizona State University, Davies reflects on how the first quantum revolution transformed our understanding of nature – and what the next one might bring.

    He explores how emerging quantum technologies are beginning to merge with artificial intelligence, raising new ethical and philosophical questions. Could quantum AI help tackle climate change or address issues like hunger? And how far should we go in outsourcing planetary management to machines that may well prioritize their own survival?

    Davies also turns his gaze to the arts, imagining a future where quantum ideas inspire music, theatre and performance. From jazz improvised by quantum algorithms to plays whose endings depend on quantum outcomes, creativity itself could enter a new superposition.

    Hosted by Andrew Glester, this episode blends cutting-edge science and imagination in trademark Paul Davies style.

    This article forms part of Physics World’s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

    Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

    Find out more on our quantum channel.

     

    The post Quantum 2.0: Paul Davies on the next revolution in physics appeared first on Physics World.

    https://physicsworld.com/a/quantum-2-0-paul-davies-on-the-next-revolution-in-physics/
    James Dacey

    Flexible electrodes for the future of light detection

    By tuning the work function of PEDOT:PSS electrodes, researchers enhance photodetector efficiency and adaptability, advancing the future of optoelectronic systems

    The post Flexible electrodes for the future of light detection appeared first on Physics World.

    Photodetectors convert light into electrical signals and are essential in technologies ranging from consumer electronics and communications to healthcare. They also play a vital role in scientific research. Researchers are continually working to improve their sensitivity, response speed, spectral range, and design efficiency.

    Since the discovery of graphene’s remarkable electrical properties, there has been growing interest in using graphene and other two-dimensional (2D) materials to advance photodetection technologies. When light interacts with these materials, it excites electrons that must travel to a nearby contact electrode to generate an electrical signal. The ease with which this occurs depends on the work functions of the materials involved – specifically, on the difference between them, known as the Schottky barrier height. Selecting an optimal combination of 2D material and electrode can minimize this barrier, enhancing the photodetector’s sensitivity and speed. Unfortunately, traditional electrode materials have fixed work functions, which limits the performance of 2D photodetectors.
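
    As background to why a tunable electrode work function helps (the simple Schottky–Mott picture, an idealization rather than anything claimed in the study), the barrier for electrons at a metal–semiconductor contact is roughly

        \Phi_{B,n} \;\approx\; \Phi_M - \chi_S

    i.e. the electrode work function Φ_M minus the semiconductor’s electron affinity χ_S, so being able to dial Φ_M between 5.1 and 3.2 eV lets the contact be matched to the band alignment of a given 2D material.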

    PEDOT:PSS is a widely used electrode material in photodetectors because of its low cost, flexibility and transparency. In this study, the researchers developed PEDOT:PSS electrodes with tunable work functions ranging from 5.1 to 3.2 eV, making them compatible with a variety of 2D materials and ideal for optimizing device performance in metal–semiconductor–metal architectures. Their detailed characterization shows that the resulting photodetectors perform very well, with strong rectification (a forward-to-reverse current ratio of ~10⁵), efficient conversion of light into electrical current (a responsivity of up to 1.8 A/W) and an exceptionally high light-to-dark current ratio (I_light/I_dark) of 10⁸. The detectors were also highly sensitive with low noise, had very fast response times (as fast as 3.2 μs) and, thanks to the transparency of PEDOT:PSS, showed extended sensitivity into the near-infrared region.
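
    For reference, two of the figures of merit quoted above have the usual definitions (standard metrics, not specific to this work):

        R = \frac{I_{\mathrm{ph}}}{P_{\mathrm{opt}}} \;\;[\mathrm{A/W}], \qquad \mathrm{on/off\ ratio} = \frac{I_{\mathrm{light}}}{I_{\mathrm{dark}}}

    that is, the photocurrent generated per watt of incident optical power, and the ratio of the current under illumination to the residual dark current.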

    This study demonstrates a tunable, transparent polymer electrode that enhances the performance and versatility of 2D photodetectors, offering a promising path toward flexible, self-powered, and wearable optoelectronic systems, and paving the way for next-generation intelligent interactive technologies.

    Read the full article

    A homogenous polymer design with widely tunable work functions for high-performance two-dimensional photodetectors

    Youchen Chen et al 2025 Rep. Prog. Phys. 88 068003

    Do you want to learn more about this topic?

    Two-dimensional material/group-III nitride hetero-structures and devices by Tingting Lin, Yi Zeng, Xinyu Liao, Jing Li, Changjian Zhou and Wenliang Wang (2025)

    The post Flexible electrodes for the future of light detection appeared first on Physics World.

    https://physicsworld.com/a/flexible-electrodes-for-the-future-of-light-detection/
    Lorna Brigham

    Quantum cryptography in practice

    A research team from China have proposed a new, experimentally feasible, method to encrypt messages using the principles of quantum mechanics

    The post Quantum cryptography in practice appeared first on Physics World.

    Quantum Conference Key Agreement (QCKA) is a cryptographic method that allows multiple parties to establish a shared secret key using quantum technology. This key can then be used for secure communication among the parties.

    Unlike traditional methods that rely on classical cryptographic techniques, QCKA leverages the principles of quantum mechanics, particularly multipartite entanglement, to ensure security.

    A key aspect of QCKA is creating and distributing entangled quantum states among the parties. These entangled states have unique properties that make it impossible for an eavesdropper to intercept the key without being detected.

    Researchers measure the efficiency and performance of the key agreement protocol using a metric known as the key rate.

    One problem with state-of-the-art QCKA schemes is that this key rate decreases exponentially with the number of users.
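
    A rough way to see why (an illustrative loss argument, not the authors’ derivation): if each of the N users is connected to the source by a channel of transmittance η, then distributing a single N-partite entangled state requires every photon to arrive, so the raw rate scales like

        R \;\sim\; \eta^{N}

    which collapses quickly as more users join. Schemes built on single-photon interference, discussed next, sidestep this because a useful detection event needs only one photon to survive the channel.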

    Previous solutions to this problem, based on single-photon interference, have come at the cost of requiring global phase locking. This makes them impractical to put in place experimentally.

    However, the authors of this new study have been able to circumvent this requirement by adopting an asynchronous pairing strategy. Put simply, this means that measurements taken by different parties in different places do not need to happen at exactly the same time.

    Their solution effectively removes the need for global phase locking while still maintaining the favourable scaling of the key rate as in other protocols based on single-photon interference.

    The new scheme represents an important step towards realising QCKA at long distances by allowing for much more practical experimental configurations.

    Schematic representation of quantum group network via circular asynchronous interference (Courtesy: Hua-Lei Yin)

    Read the full article

    Repeater-like asynchronous measurement-device-independent quantum conference key agreement

    Yu-Shuo Lu et al., 2025 Rep. Prog. Phys. 88 067901

    The post Quantum cryptography in practice appeared first on Physics World.

    https://physicsworld.com/a/quantum-cryptography-in-practice/
    Paul Mabey

    Scientists realize superconductivity in traditional semiconducting material

    Superconducting germanium could find application in a new generation of quantum devices

    The post Scientists realize superconductivity in traditional semiconducting material appeared first on Physics World.

    Superconducting germanium:gallium trilayer
    Coherent crystalline interfaces Atomic-resolution image of a superconducting germanium:gallium (Ge:Ga) trilayer with alternating Ge:Ga and silicon layers demonstrating precise control of atomic interfaces. (Courtesy: Salva Salmani-Rezaie)

    The ability to induce superconductivity in materials that are inherently semiconducting has been a longstanding research goal. Improving the conductivity of semiconductor materials could help develop quantum technologies with high speed and energy efficiency, including superconducting quantum bits (qubits) and cryogenic CMOS control circuitry. However, this task has proved challenging in traditional semiconductors – such as silicon or germanium – as it is difficult to maintain the atomic structure needed for superconductivity.

    In a new study, published in Nature Nanotechnology, researchers have used molecular beam epitaxy (MBE) to grow gallium-hyperdoped germanium films that retain their superconductivity. When asked about the motivation for this latest work, Peter Jacobson from the University of Queensland tells Physics World about his collaboration with Javad Shabani from New York University.

    “I had been working on superconducting circuits when I met Javad and discovered the new materials their team was making,” he explains. “We are all trying to understand how to control materials and tune interfaces in ways that could improve quantum devices.”

    Germanium: from semiconductor to superconductor

    Germanium is a group IV element, so its properties bridge those of both metals and insulators. Superconductivity can be induced in germanium by manipulating its atomic structure to introduce more electrons into the atomic lattice. These extra electrons interact with the germanium lattice to create electron pairs that move without resistance, or in other words, they become superconducting.

    Hyperdoping germanium (at concentrations well above the solid solubility limit) with gallium induces a superconducting state. However, this material is traditionally unstable due to the presence of structural defects, dopant clustering and poor thickness control. There have also been many questions raised as to whether these materials are intrinsically superconducting, or whether it is actually gallium clusters and unintended phases that are solely responsible for the superconductivity of gallium-doped germanium.

    Considering these issues and looking for a potential new approach, Jacobson notes that X-ray absorption measurements at the Australian Synchrotron were “the first real sign” that Shabani’s team had grown something special. “The gallium signal was exceptionally clean, and early modelling showed that the data lined up almost perfectly with a purely substitutional picture,” he explains. “That was a genuine surprise. Once we confirmed and extended those results, it became clear that we could probe the mechanism of superconductivity in these films without the usual complications from disorder or spurious phases.”

    Epitaxial growth improves superconductivity control

    In a new approach, Jacobson, Shabani and colleagues used MBE to grow the crystals instead of relying on ion implantation techniques, allowing the germanium to be hyperdoped with gallium. Using MBE forces the gallium atoms to replace germanium atoms within the crystal lattice at levels much higher than previously seen. The process also provided better control over parasitic heating during film growth, allowing the researchers to achieve the structural precision required to understand and control the superconductivity of these germanium:gallium (Ge:Ga) materials, which were found to become superconducting at 3.5 K with a carrier concentration of 4.15 × 10²¹ holes/cm³. The critical gallium dopant threshold to achieve this was 17.9%.
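    As a rough back-of-the-envelope estimate (assuming the standard atomic density of germanium of roughly 4.4 × 10²² atoms/cm³, a value not taken from the study), the 17.9% gallium fraction can be converted into an absolute dopant concentration and compared with the measured hole density:

```python
# Back-of-the-envelope estimate. The germanium atomic density is an
# approximate textbook value; the other figures are quoted in the article.

ge_atomic_density = 4.4e22     # atoms per cm^3 (approximate textbook value)
ga_fraction = 0.179            # critical gallium dopant fraction from the study
hole_density = 4.15e21         # measured carrier concentration, holes per cm^3

ga_density = ga_fraction * ge_atomic_density   # absolute Ga concentration
activation = hole_density / ga_density         # rough holes contributed per Ga atom

print(f"Ga dopant density ~ {ga_density:.1e} cm^-3")   # roughly 8e21 cm^-3
print(f"Holes per Ga atom ~ {activation:.2f}")          # roughly one hole per two Ga atoms
```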

    Using synchrotron-based X-ray absorption, the team found that the gallium dopants were substitutionally incorporated into the germanium lattice and induced a tetragonal distortion to the unit cell. Density functional theory calculations showed that this causes a shift in the Fermi level into the valence band and flattens electronic bands. This suggests that the structural order of gallium in the germanium lattice creates a narrow band that facilitates superconductivity in germanium, and that this superconductivity arises intrinsically in the germanium, rather than being governed by defects and gallium clusters.

    The researchers tested trilayer heterostructures – Ge:Ga/Si/Ge:Ga and Ge:Ga/Ge/Ge:Ga – as proof-of-principle designs for vertical Josephson junction device architectures. In the future, they hope to develop these into fully fledged Josephson junction devices.

    Commenting on the team’s future plans for this research, Jacobson concludes: “I’m very keen to examine this material with low-temperature scanning tunnelling microscopy (STM) to directly measure the superconducting gap, because STM adds atomic-scale insights that complement our other measurements and will help clarify what sets hyperdoped germanium apart”.

    The post Scientists realize superconductivity in traditional semiconducting material appeared first on Physics World.

    https://physicsworld.com/a/scientists-realize-superconductivity-in-traditional-semiconducting-material/
    No Author

    Better coffee, easier parking and more: the fascinating physics of daily life

    The author of Physics Around the Clock is our podcast guest

    The post Better coffee, easier parking and more: the fascinating physics of daily life appeared first on Physics World.

    It is book week here at Physics World and over the course of three days we are presenting conversations with the authors of three fascinating and fun books about physics. First up is my Physics World colleague Michael Banks, whose book Physics Around the Clock: Adventures in the Science of Everyday Living starts with your morning coffee and ends with a formula for making your evening television viewing more satisfying.

    As well as the rich physics of coffee, we chat about strategies for finding the best parking spot and the efficient boarding of aeroplanes. If you have ever wondered why a runner’s ponytail swings from side-to-side when they reach a certain speed – we have the answer for you.

    Other daily mysteries that we explore include how a hard steel razor blade can be dulled by cutting relatively soft hairs and why quasiparticles called “jamitons” are helping physicists understand the spontaneous appearance of traffic jams. And a warning for squeamish listeners, we do talk about the amazing virus-spreading capabilities of a flushing toilet.

    This episode is supported by the APS Global Physics Summit, which takes place on 15–20 March, 2026, in Denver, Colorado, and online.

    The post Better coffee, easier parking and more: the fascinating physics of daily life appeared first on Physics World.

    https://physicsworld.com/a/better-coffee-easier-parking-and-more-the-fascinating-physics-of-daily-life/
    Hamish Johnston

    Cosmic dawn: the search for the primordial hydrogen signal

    Sarah Wild talks to astronomers across the world who are on a hunt for a subtle hydrogen signal that could confirm or disprove our ideas on the universe’s evolution

    The post Cosmic dawn: the search for the primordial hydrogen signal appeared first on Physics World.

    “This is one of the big remaining frontiers in astronomy,” says Phil Bull, a cosmologist at the Jodrell Bank Centre for Astrophysics at the University of Manchester. “It’s quite a pivotal era of cosmic history that, it turns out, we don’t actually understand.”

    Bull is referring to the vital but baffling period in the early universe – from 380,000 years to one billion years after the Big Bang – when its structure went from simple to complex. To lift the veil on this epoch, experiments around the world – from Australia to the Arctic – are racing to find a specific but elusive signal from the earliest hydrogen atoms. This signal could confirm or disprove scientists’ theories of how the universe evolved and the physics that governs it.

    Hydrogen is the most abundant element in the universe. When a neutral hydrogen atom flips between the two hyperfine states of its ground state, it can emit or absorb a photon. This spin-flip transition, which can also be stimulated by radiation, produces an emission or absorption radio signal with a wavelength of 21 cm. To find out what happened in that early era of the universe, astronomers are searching for the 21 cm photons emitted by primordial hydrogen atoms.

    But despite more teams joining the hunt every year, no-one has yet had a confirmed detection of this radiation. So who will win the race to find this signal and how is the hunt being carried out?

    A blank spot

    Let’s first return to about 380,000 years after the Big Bang, when the universe had expanded and cooled to below 3000 K. At this stage, neutral atoms, including atomic hydrogen, could form. Thanks to the absence of free electrons, ordinary matter particles could decouple from light, allowing it to travel freely across the universe. This ancient radiation that permeates the sky is known as the cosmic microwave background (CMB).

    But after that we don’t know much about what happened for the next few hundred million years. Meanwhile, the oldest known galaxy MoM-z14 – which existed about 280 million years after the Big Bang – was observed in April 2025 by the James Webb Space Telescope. So there is currently a gap of just under 280 million years in our observations of the early universe. “It’s one of the last blank spots,” says Anastasia Fialkov, an astrophysicist at the Institute of Astronomy of the University of Cambridge.

    This “blank spot” is a bridge between the early, simple universe and today’s complex structured cosmos. During this early epoch, the universe went from being filled with a thick cloud of neutral hydrogen, to being diversely populated with stars, black holes and everything in between. It covers the end of the cosmic dark ages, the cosmic dawn, and the epoch of reionization – and is arguably one of the most exciting periods in our universe’s evolution.

    During the cosmic dark ages, after the CMB flooded the universe, the only “ordinary” matter (made up of protons, neutrons and electrons) was neutral hydrogen (75% by mass) and neutral helium (25%), and there were no stellar structures to provide light. It is thought that gravity then magnified any slight fluctuations in density, causing some of this primordial gas to clump and eventually form the first stars and galaxies – a time called the cosmic dawn. Next came the epoch of reionization, when ultraviolet and X-ray emissions from those first celestial objects heated and ionized the hydrogen atoms, turning the neutral gas into a charged plasma of electrons and protons.

    Stellar imprint

    The 21 cm signal astronomers are searching for was produced when the spectral transition was excited by collisions in the hydrogen gas during the dark ages and then by the first photons from the first stars during the cosmic dawn. However, the intensity of the 21 cm signal can only be measured against the CMB, which acts as a steady background source of 21 cm photons.

    When the hydrogen was colder than the background radiation, there were few collisions, and the atoms would have absorbed slightly more 21 cm photons from the CMB than they emitted themselves. The 21 cm signal would appear as a deficit, or absorption signal, against the CMB. But when the neutral gas was hotter than the CMB, the atoms would emit more photons than they absorbed, causing the 21 cm signal to be seen as a brighter emission against the CMB. These absorption and emission rates depend on the density and temperature of the gas, and the timing and intensity of radiation from the first cosmic sources. Essentially, the 21 cm signal became imprinted with how those early sources transformed the young universe.

    One way scientists are trying to observe this imprint is to measure the average – or “global” – signal across the sky, looking at how it shifts from absorption to emission compared to the CMB. Normally, a 21 cm radio wave signal has a frequency of about 1420 MHz. But this ancient signal, according to theory, has been emitted and absorbed at different intensities throughout this cosmic “blank spot”, depending on the universe’s evolutionary processes at the time. The expanding universe has also stretched and distorted the signal as it travelled to Earth. Theories predict that it would now be in the 1 to 200 MHz frequency range – with lower frequencies corresponding to older eras – and would have a wavelength of metres rather than centimetres.
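    For a sense of the numbers, an observed frequency can be converted into a redshift, and hence an epoch, using the rest-frame 21 cm line near 1420 MHz. The short sketch below applies these standard relations to a few illustrative frequencies (including the 78 MHz region discussed later):

```python
# Convert an observed frequency of the redshifted 21 cm line into a redshift
# and an observed wavelength. The rest-frame values are standard; the example
# observed frequencies are chosen for illustration.

REST_FREQ_MHZ = 1420.4      # rest-frame frequency of the hydrogen 21 cm line
REST_WAVELENGTH_M = 0.211   # rest-frame wavelength in metres

def redshift(observed_freq_mhz):
    return REST_FREQ_MHZ / observed_freq_mhz - 1

def observed_wavelength_m(observed_freq_mhz):
    return REST_WAVELENGTH_M * (1 + redshift(observed_freq_mhz))

for nu in (200, 78, 50, 10):   # MHz
    z = redshift(nu)
    lam = observed_wavelength_m(nu)
    print(f"{nu:3d} MHz  ->  z ~ {z:5.1f},  wavelength ~ {lam:5.1f} m")
```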

    Importantly, the shape of the global 21 cm signal over time could confirm the lambda-cold dark matter (ΛCDM) model, which is the most widely accepted theory of the cosmos; or it could upend it. Many astronomers have dedicated their careers to finding this radiation, but it is challenging for a number of reasons.

    Unfortunately, the signal is incredibly faint. Its brightness temperature, which is measured as the change in the CMB’s black body temperature (2.7 K), will only be in the region of 0.1 K.

    1 The 21 cm signal across cosmic time

    a A simulation of the sky-averaged (global) signal as a function of time (horizontal) and space (vertical). b A typical model of the global 21 cm line with the main cosmic events highlighted. Each experiment searching for the global 21 cm signal focuses on a particular frequency band; for example, the Radio Experiment for the Analysis of Cosmic Hydrogen (REACH) is looking at the 50–170 MHz range (blue). (a CC BY 4.0 The Royal Society/A Fialkov et al. 2024 Philos. Trans. A Math. Phys. Eng. Sci. 382 20230068; b copyright Springer Nature, reused with permission from E de Lera Acedo et al. 2022 Nature Astronomy 6 984)

    There is also no single source of this emission, so, like the CMB, it permeates the universe. “If it was the only signal in the sky, we would have found it by now,” says Eloy de Lera Acedo, head of Cavendish Radio Astronomy and Cosmology at the University of Cambridge. But the universe is full of contamination, with the Milky Way being a major culprit. Scientists are searching for 0.1 K in an environment “that’s a million times brighter”, he explains.

    And even before this signal reaches the radio-noisy Earth, it has to travel through the atmosphere, which further distorts and contaminates it. “It’s a very difficult measurement,” says Rigel Cappallo, a research scientist at the MIT Haystack Observatory. “It takes a really, really well calibrated instrument that you understand really well, plus really good modelling.”

    Seen but not confirmed

    In 2018 the Experiment to Detect the Global EoR Signature (EDGES) – a collaboration between Arizona State University and MIT Haystack Observatory – hit the headlines when it claimed to have detected the global 21 cm signal (Nature 555 67).

    The EDGES instrument is a dipole antenna, which resembles a ping-pong table with a gap in the middle (see photo at top of article for the 2024 set-up). It is mounted on a large metal groundsheet, which is about 30 × 30 m. Its ground-breaking observation was made at a remote site in western Australia, far from radio frequency interference.

    But in the intervening seven years, no-one else has been able to replicate the EDGES results.

    The spectrum dip that EDGES detected was very different from what theorists had expected. “There is a whole family of models that are predicted by the different cosmological scenarios,” explains Ravi Subrahmanyan, a research scientist at Australia’s national science agency CSIRO. “When we take measurements, we compare them with the models, so that we can rule those models in or out.”

    In general, the current models predict a very specific envelope of signal possibilities (see figure 1). First, they anticipate an absorption dip in brightness temperature of around 0.1 to 0.2 K, caused by the temperature difference between the cold hydrogen gas (in an expanding universe) and the warmer CMB. Then, a speedy rise and photon emission is predicted as the gas starts to warm when the first stars form, and the signal should spike dramatically when the first X-ray binary stars fire up and heat up the surrounding gas. The signal is then expected to fade as the epoch of reionization begins, because ionized particles cannot undergo the spectral transition. With models, scientists theorize when this happened, how many stars there were, and how the cosmos unfurled.

    2 Weird signal

    The 21 cm signals predicted by current cosmology models (coloured lines) and the detection by the EDGES experiment (dashed black line). (Courtesy: SARAS Team)

    “It’s just one line, but it packs in so many physical phenomena,” says Fialkov, referring to the shape of the 21 cm signal’s brightness temperature over time. The timing of the dip, its gradient and magnitude all represent different milestones in cosmic history, which affect how it evolved.

    The EDGES team, however, reported a dip of more than double the predicted size, at about 78 MHz (see figure 2). While the frequency was consistent with predictions, the very wide and deep dip of the signal took the community by surprise.

    “It would be a revolution in physics, because that signal will call for very, very exotic physics to explain it,” says de Lera Acedo. “Of course, the first thing we need to do is to make sure that that is actually the signal.”

    A spanner in the works

    The EDGES claim has galvanized the cosmology community. “It set a cat among the pigeons,” says Bull. “People realized that, actually, there’s some very exciting science to be done here.” Some groups are trying to replicate the EDGES observation, while others are trying new approaches to detect the signal that the models promise.

    The Radio Experiment for the Analysis of Cosmic Hydrogen (REACH) – a collaboration between the University of Cambridge and Stellenbosch University in South Africa – focuses on the 50–170 MHz frequency range. Sitting on the dry and empty plains of South Africa’s Northern Cape, it is targeting the EDGES observation (Nature Astronomy 6 984).

    A large metal mesh topped with two antennas, in a desert
    The race to replicate REACH went online in the Karoo region of South Africa in December 2023. (Courtesy: Saurabh Pegwal, REACH collaboration)

    In this radio-quiet environment, REACH has set up two antennas: one looks like EDGES’ dipole ping-pong table, while the other is a spiral cone. They sit on top of a giant metallic mesh – the ground plate – in the shape of a many-pointed star, which aims to minimize reflections from the ground.

    Hunting for this signal “requires precision cosmology and engineering”, says de Lera Acedo, the principal investigator on REACH. Reflections from the ground or mesh, calibration errors and signals from the soil are the kryptonite of cosmic dawn measurements. “You need to reduce your systemic noise, do better analysis, better calibration, better cleaning [to remove other sources from observations],” he says.

    Desert, water, snow

    Another radio telescope, dubbed the Shaped Antenna measurement of the background Radio Spectrum (SARAS) – which was established in the late 2000s by the Raman Research Institute (RRI) in Bengaluru, India – has undergone a number of transformations to reduce noise and limit other sources of radiation. Over time, it has morphed from a dipole on the ground to a metallic cone floating on a raft. It is looking at 40 to 200 MHz (Exp. Astron. 51 193).

    After the EDGES claim, SARAS pivoted its attention to verifying the detection, explains Saurabh Singh, a research scientist at the RRI. “Initially, we were not able to get down to the required sensitivity to be able to say anything about their detection,” he explains. “That’s why we started floating our radiometer on water.” Buoying the experiment reduces ground contamination and creates a more predictable surface to include in calculations.

    Four photos of the SARAS telescope with different designs and in different locations
    Floating telescope Evolution of the SARAS experiment and sites up to 2020. The third edition of the telescope, SARAS 3, was deployed on lakes to further reduce radio interference. (Courtesy: SARAS Team)

    Using data from their floating radiometer, in 2022 Singh and colleagues disfavoured EDGES’ claim (Nature Astronomy 6 607), but for many groups the detection still remains a target for observations.

    While SARAS has yet to detect a cosmic-dawn signal of its own, Singh says that non-detection is also an important element of finding the global 21 cm signal. “Non-detection gives us an opportunity to rule out a lot of these models, and that has helped us to reject a lot of properties of these stars and galaxies,” he says.

    Raul Monsalve Jara – a cosmologist at the University of California, Berkeley – has been part of the EDGES collaboration since 2012, but decided to also explore other ways to detect the signal. “My view is that we need several experiments doing different things and taking different approaches,” he says.

    The Mapper of the IGM Spin Temperature (MIST) experiment, of which Monsalve is co-principal investigator, is a collaboration between Chilean, Canadian, Australian and American researchers. These instruments are looking at 25 to 105 MHz (MNRAS 530 4125). “Our approach was to simplify the instrument, get rid of the metal ground plate, and to take small, portable instruments to remote locations,” he explains. These locations have to fulfil very specific requirements – everything around the instrument, from mountains to the soil, can impact the instrument’s performance. “If the soil itself is irregular, that will be very difficult to characterize and its impact will be difficult to remove [from observations],” Monsalve says.

    Two photos of a small portable radio telescope – in a snowy Arctic region and in a hot desert
    Physics on the move MIST conducts measurements of the sky-averaged radio spectrum at frequencies below 200 MHz. Its monopole and dipole variants are highly portable and have been deployed in some of the most remote sites on Earth, including the Arctic (top) and the Nevada desert (bottom). (Courtesy: Raul Monsalve)

    So far, the MIST instrument, which is also a dipole ping-pong table, has visited a desert in California, another in Nevada, and even the Arctic. Each time, the researchers spend a few weeks at the site collecting data; the instrument is portable and easy to set up, Monsalve explains. The team is planning more observations in Chile. “If you suspect that your environment could be doing something to your measurements, then you need to be able to move around,” continues Monsalve. “And we are contributing to the field by doing that.”

    Aaron Parsons, also from the University of California, Berkeley, decided that the best way to detect this elusive signal would be to try and eliminate the ground entirely – by suspending a rotating antenna over a giant canyon with 100 m empty space in every direction.

    His Electromagnetically Isolated Global Signal Estimation Platform (EIGSEP) includes an antenna hanging four storeys above the ground, attached to Kevlar cable strung across a canyon in Utah. It’s observing at 50 to 250 MHz. “It continuously rotates around and twists every which way,” Parsons explains. Hopefully, that will allow them to calibrate the instrument very accurately. Two antennas on the ground cross-correlate observations. EIGSEP began making observations last year.

    More experiments are expected to come online in the next year. The Remote HI eNvironment Observer (RHINO), an initiative of the University of Manchester, will have a horn-shaped receiver made of a metal mesh that is usually used to construct skyscrapers. Horn shapes are particularly good for calibration, allowing for very precise measurements. The most famous horn-shaped antenna is Bell Laboratories’ Holmdel Horn Antenna in the US, with which two scientists accidentally discovered the CMB in 1965.

    Initially, RHINO will be based at Jodrell Bank Observatory in the UK, but like other experiments, it could travel to other remote locations to hunt for the 21 cm signal.

    Similarly, Subrahmanyan – who established the SARAS experiment in India and is now with CSIRO in Australia – is working to design a new radiometer from scratch. The instrument, which will focus on 40–160 MHz, is called Global Imprints from Nascent Atoms to Now (GINAN). He says that it will feature a recently patented self-calibrating antenna. “It gives a much more authentic measurement of the sky signal as measured by the antenna,” he explains.

    In the meantime, the EDGES collaboration has not been idle. Cappallo, at MIT Haystack Observatory, is the project manager of EDGES, which is currently in its third iteration. The instrument is still the size of a desk, but its top now looks like a closed box with the electronics tucked inside, and it sits on an even larger metal ground plate. The team has now made observations from islands in the Canadian archipelago and in Alaska’s Aleutian island chain (see photo at top of article).

    “The 2018 EDGES result is not going to be accepted by the community until somebody completely independently verifies it,” Cappallo explains. “But just for our own sanity and also to try to improve on what we can do, we want to see it from as many places as possible and as many conditions as possible.” The EDGES team has replicated its results using the same data analysis pipeline, but no-one else has been able to reproduce the unusual signal.

    All the astronomers interviewed welcomed the introduction of new experiments. “I think it’s good to have a rich field of people trying to do this experiment because nobody is going to trust any one measurement,” says Parsons. “We need to build consensus here.”

    Taking off

    Some astronomers have decided to avoid the struggles of trying to detect the global 21 cm signal from Earth – instead, they have their sights set on the Moon. Earth’s atmosphere is one of the reasons why the 21 cm signal is so difficult to measure. The ionosphere, a charged region of the atmosphere, distorts and contaminates this incredibly faint signal. On the far side of the Moon, any antenna would also be shielded from the cacophony of radio-frequency interference from Earth.

    “This is why some experiments are going to the Moon,” says Parsons, adding that he is involved in NASA’s LuSEE-Night experiment. LuSEE-Night, or the Lunar Surface Electromagnetics Experiment, aims to land a low-frequency experiment on the Moon next year.

    In July, at the National Astronomical Meeting in Durham, the University of Cambridge’s de Lera Acedo presented a proposal to put a miniature radiometer into lunar orbit. Dubbed “Cosmocube”, it will be a nanosatellite that will orbit the Moon searching for this 21 cm signal.

    Illustration of a satellite with sails
    Taking the hunt to space Provisional illustration of the CosmoCube with its antenna deployed for the 21 cm signal detection, i.e. in operational mode in space. This nanosatellite would travel to the far side of the Moon to get away from the Earth’s ionosphere, which introduces substantial distortions and absorption effects to any radio signal detection. (CC BY 4.0 Artuc and de Lera Acedo 2024 RAS Techniques and Instruments 4 rzae061)

    “It is just in the making,” says de Lera Acedo, adding that it will not be in operation for at least a decade. “But it is the next step.”

    In the meantime, groups here on Earth are racing to detect this elusive signal. The instruments are getting more sensitive, the modelling is improving, and the unknowns are shrinking. “If we do the experiments right, we will find the signal,” Monsalve believes. The big question is which of the many groups with their hat in the ring is doing the experiment “right”.

    The post Cosmic dawn: the search for the primordial hydrogen signal appeared first on Physics World.

    https://physicsworld.com/a/cosmic-dawn-the-search-for-the-primordial-hydrogen-signal/
    No Author

    Ten-ion system brings us a step closer to large-scale qubit registers

    Each ion is uniquely entangled with a photon

    The post Ten-ion system brings us a step closer to large-scale qubit registers appeared first on Physics World.

    Photo of the members of Ben Lanyon's research group
    Team effort Based at the University of Innsbruck, Ben Lanyon’s group has created a novel qubit register by trapping ten ions. (Courtesy: Victor Krutyanskiy/University of Innsbruck)

    Researchers in Austria have entangled matter-based qubits with photonic qubits in a ten-ion system. The technique is scalable to larger ion-qubit registers, paving the way for the creation of larger and more complex quantum networks.

    Visualization of the ten ion quantum
    Ions in motion Each ion (large object) is moved one at a time into the “sweet spot” of the optical cavity. Once there, a laser beam drives the emission of a single photon (small object), entangled with the ion. The colours indicate ion–photon entanglement. (Courtesy: Universität Innsbruck/Harald Ritsch)

    Quantum networks consist of matter-based nodes that store and process quantum information and are linked through photons (quanta of light). Already, Ben Lanyon’s group at the University of Innsbruck has made advances in this direction by entangling two ions in different systems. Now, in a new paper published in Physical Review Letters, they describe how they have developed and demonstrated a new method to entangle a string of ten ions with photons. In the future, this approach could enable the entanglement of sets of ions in different locations through light, rather than one ion at a time.

    To achieve this, Lanyon and colleagues trapped a chain of ten calcium ions in a linear trap placed inside an optical cavity. By changing the trapping voltages, each ion was moved, one by one, into the cavity. Once inside, the ion was positioned at the “sweet spot”, where its interaction with the cavity is strongest. There, the ion emitted a single photon when exposed to a 393 nm Raman laser beam. This beam was tightly focused on a single ion, guaranteeing that the emitted photon – collected in a single-mode optical fibre – came from only one ion at a time. The process was carried out ten times, once per ion, to obtain a train of ten photons.

    Using quantum state tomography, the researchers reconstructed the density matrix, which describes the correlations between the states of ion i and photon j. To do so, they measured every ion and photon state in three different bases, giving nine Pauli-basis measurement configurations. From the density matrix, the concurrence (a measure of entanglement) between ion i and photon j was found to be positive only when i = j, and equal to zero otherwise. This implies that each ion is uniquely entangled with the photon it produced, and unentangled with the photons produced by the other ions.

    From the density matrix, they also calculated the fidelity with respect to a Bell state (a state of maximum entanglement), which averaged 92%. As Marco Canteri points out, “this fidelity characterizes the quality of entanglement between the ion-photon pair for i=j”.
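    For readers unfamiliar with these quantities, the sketch below evaluates the Bell-state fidelity and the Wootters concurrence for a simple two-qubit example: a Bell state mixed with white noise using an arbitrary mixing parameter. It uses only the standard textbook definitions and is not the team’s analysis code.

```python
import numpy as np

# Example two-qubit state: the Bell state |Phi+> mixed with white noise.
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
bell = np.outer(phi_plus, phi_plus.conj())
p = 0.92                                   # arbitrary mixing parameter (assumed)
rho = p * bell + (1 - p) * np.eye(4) / 4   # Werner-like mixed state

# Fidelity with the Bell state: F = <Phi+| rho |Phi+>
fidelity = np.real(phi_plus.conj() @ rho @ phi_plus)

# Wootters concurrence: C = max(0, l1 - l2 - l3 - l4), where the l_i are the
# square roots of the eigenvalues of rho * (sy x sy) rho* (sy x sy), sorted
# in decreasing order.
sy = np.array([[0, -1j], [1j, 0]])
yy = np.kron(sy, sy)
rho_tilde = yy @ rho.conj() @ yy
lams = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde)))
lams = np.sort(np.real(lams))[::-1]
concurrence = max(0.0, lams[0] - lams[1] - lams[2] - lams[3])

print(f"Bell-state fidelity: {fidelity:.3f}")
print(f"Concurrence:         {concurrence:.3f}")
```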

    This work developed and demonstrated a technique whereby matter-based qubits and photonic qubits can be entangled, one at a time, in ion strings. Now, the group aims to “demonstrate universal quantum logic within the photon-interfaced 10-ion register and, building up towards entangling two remote 10-ion processors through the exchange of photons between them,” explains team member Victor Krutyanskiy. If this method effectively scales to larger systems, more complex quantum networks could be built. This would lead to applications in quantum communication and quantum sensing.

    The post Ten-ion system brings us a step closer to large-scale qubit registers appeared first on Physics World.

    https://physicsworld.com/a/ten-ion-system-brings-us-a-step-closer-to-large-scale-qubit-registers/
    Nohora Hernández

    Non-invasive wearable device measures blood flow to the brain

    Speckle contrast optical spectroscopy provides a cost-effective way to assess cerebral blood flow for diagnosis of neurological disorders

    The post Non-invasive wearable device measures blood flow to the brain appeared first on Physics World.

    Measuring blood flow to the brain is essential for diagnosing and developing treatments for neurological disorders such as stroke, vascular dementia or traumatic brain injury. Performing this measurement non-invasively is challenging, however, and achieved predominantly using costly MRI and nuclear medicine imaging techniques.

    Emerging as an alternative, modalities based on optical transcranial measurement are cost-effective and easy to use. In particular, speckle contrast optical spectroscopy (SCOS) – an offshoot of laser speckle contrast imaging, which uses laser light speckles to visualize blood vessels – can measure cerebral blood flow (CBF) with high temporal resolution, typically above 30 Hz, and cerebral blood volume (CBV) through optical signal attenuation.

    Researchers at the California Institute of Technology (Caltech) and the Keck School of Medicine’s USC Neurorestoration Center have designed a lightweight SCOS system that accurately measures blood flow to the brain, distinguishing it from blood flow to the scalp. Co-senior author Charles Liu of the Keck School of Medicine and team describe the system and their initial experimentation with it in APL Bioengineering.

    Detection channels in a speckle contrast optical spectroscopy system
    Seven simultaneous measurements Detection channels with differing source-to-detector distances monitor blood dynamics in the scalp, skull and brain layers. (Courtesy: CC BY 4.0/APL Bioeng. 10.1063/5.0263953)

    The SCOS system consists of a 3D-printed head mount designed for secure placement over the temple region. It holds a single 830 nm laser illumination fibre and seven detector fibres positioned at seven different source-to-detector (S–D) distances (between 0.6 and 2.6 cm) to simultaneously capture blood flow dynamics across layers of the scalp, skull and brain. Fibres with shorter S–D distances acquire shallower optical data from the scalp, while those with greater distances obtain deeper and broader data. The seven channels are synchronized to exhibit identical oscillation frequencies corresponding to the heart rate and cardiac cycle.

    When the SCOS system directs the laser light onto a sample, multiple random scattering events occur before the light exits the sample, creating speckles. These speckles, which materialize on rapid timescales, are the result of interference of light travelling along different trajectories. Movement within the sample (of red blood cells, for instance) causes dynamic changes in the speckle field. These changes are captured by a multi-million-pixel camera with a frame rate above 30 frames/s and quantified by calculating the speckle contrast value for each image.
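    In practice, the speckle contrast is simply the standard deviation of the pixel intensities divided by their mean, usually computed over small local windows; the faster the motion, the more the speckles blur and the lower the contrast. Below is a minimal, generic sketch of that calculation on synthetic data (it is not the Caltech/USC processing pipeline):

```python
import numpy as np

def speckle_contrast(frame, window=7):
    """Average local speckle contrast K = std/mean over non-overlapping windows."""
    h, w = frame.shape
    ks = []
    for i in range(0, h - window + 1, window):
        for j in range(0, w - window + 1, window):
            patch = frame[i:i + window, j:j + window]
            ks.append(patch.std() / patch.mean())
    return float(np.mean(ks))

# Synthetic test: fully developed static speckle has K close to 1, while
# averaging over motion (simulated here by summing independent speckle
# patterns) lowers the contrast, just as faster blood flow would.
rng = np.random.default_rng(0)
static = rng.exponential(scale=1.0, size=(256, 256))            # static speckle
moving = np.mean(rng.exponential(1.0, size=(10, 256, 256)), axis=0)

print(f"K (static) ~ {speckle_contrast(static):.2f}")   # close to 1
print(f"K (moving) ~ {speckle_contrast(moving):.2f}")   # noticeably lower
```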

    Human testing

    The researchers used the SCOS system to perform CBF and CBV measurements in 20 healthy volunteers. To separate superficial scalp blood dynamics from brain signals, they gently pressed on the superficial temporal artery (a terminal branch of the external carotid artery that supplies blood to the face and scalp) to block blood flow to the scalp.

    In tests on the volunteers, when temporal artery blood flow was occluded for 8 s, scalp-sensitive channels exhibited significant decreases in blood flow while brain-sensitive channels showed minimal change, enabling signals from the internal carotid artery that supplies blood to the brain to be clearly distinguished. Additionally, the team found that positioning the detector 2.3 cm or more away from the source allowed for optimal brain blood flow measurement while minimizing interference from the scalp.

    “Combined with the simultaneous measurements at seven S–D separations, this approach enables the first quantitative experimental assessment of how scalp and brain signal contributions vary with depth in SCOS-based CBF measurements and, more broadly, in optical measurements,” they write. “This work also provides crucial insights into the optimal device S–D distance configuration for preferentially probing brain signal over scalp signal, with a practical and subject-friendly alternative for evaluating depth sensitivity, and complements more advanced, hardware-intensive strategies such as time-domain gating.”

    The researchers are now working to improve the signal-to-noise ratio of the system. They plan to introduce a compact, portable laser and develop a custom-designed extended camera that spans over 3 cm in one dimension, enabling simultaneous and continuous measurement of blood dynamics across S–D distances from 0.5 to 3.5 cm. These design advancements will enhance spatial resolution and enable deeper brain measurements.

    “This crucial step will help transition the system into a compact, wearable form suitable for clinical use,” comments Liu. “Importantly, the measurements described in this publication were achieved in human subjects in a very similar manner to how the final device will be used, greatly reducing barriers to clinical application.”

    “I believe this study will advance the engineering of SCOS systems and bring us closer to a wearable, clinically practical device for monitoring brain blood flow,” adds co-author Simon Mahler, now at Stevens Institute of Technology. “I am particularly excited about the next stage of this project: developing a wearable SCOS system that can simultaneously measure both scalp and brain blood flow, which will unlock many fascinating new experiments.”

    The post Non-invasive wearable device measures blood flow to the brain appeared first on Physics World.

    https://physicsworld.com/a/non-invasive-wearable-device-measures-blood-flow-to-the-brain/
    No Author

    The future of quantum physics and technology debated at the Royal Institution

    Matin Durrani looks back at a week-long series of events in the UK to mark Quantum Year

    The post The future of quantum physics and technology debated at the Royal Institution appeared first on Physics World.

    As we enter the final stretch of the International Year of Quantum Science and Technology (IYQ), I hope you’ve enjoyed our extensive quantum coverage over the last 12 months. We’ve tackled the history of the subject, explored some of the unexplained mysteries that still make quantum physics so exciting, and examined many of the commercial applications of quantum technology. You can find most of our coverage collected into two free-to-read digital Quantum Briefings, available here and here on the Physics World website.

    Over the last 100 years since Werner Heisenberg first developed quantum mechanics on the island of Helgoland in June 1925, quantum mechanics has proved to be an incredibly powerful, successful and logically consistent theory. Our understanding of the subatomic world is no longer the “lamentable hodgepodge of hypotheses, principles, theorems and computational recipes”, as the Israeli physicist and philosopher Max Jammer memorably once described it.

    In fact, quantum mechanics has not just transformed our understanding of the natural world; it has immense practical ramifications too, with so-called “quantum 1.0” technologies – lasers, semiconductors and electronics – underpinning our modern world. But as was clear from the UK National Quantum Technologies Showcase in London last week, organized by Innovate UK, the “quantum 2.0” revolution is now in full swing.

    The day-long event, which is now in its 10th year, featured over 100 exhibitors, including many companies that are already using fundamental quantum concepts such as entanglement and superposition to support the burgeoning fields of quantum computing, quantum sensing and quantum communication. The show was attended by more than 3000 delegates, some of whom almost had to be ushered out of the door at closing time, so keen were they to keep talking.

    Last week also saw a two-day conference at the historic Royal Institution (RI) in central London that was a centrepiece of IYQ in the UK and Ireland. Entitled Quantum Science and Technology: the First 100 Years; Our Quantum Future and attended by over 300 people, it was organized by the History of Physics and the Business Innovation and Growth groups of the Institute of Physics (IOP), which publishes Physics World.

    The first day, focusing on the foundations of quantum mechanics, ended with a panel discussion – chaired by my colleague Tushna Commissariat and Daisy Shearer from the UK’s National Quantum Computing Centre – with physicists Fay Dowker (Imperial College), Jim Al-Khalili (University of Surrey) and Peter Knight. They talked about whether the quantum wavefunction provides a complete description of physical reality, prompting much discussion with the audience. As Al-Khalili wryly noted, if entanglement has emerged as the fundamental feature of quantum reality, then “decoherence is her annoying and ever-present little brother”.

    Knight, meanwhile, who is a powerful figure in quantum-policy circles, went as far as to say that the limit of decoherence – and indeed the boundary between the classical and quantum worlds – is not a fixed and yet-to-be revealed point. Instead, he mused, it will be determined by how much money and ingenuity and time physicists have at their disposal.

    On the second day of the IOP conference at the RI, I chaired a discussion that brought together four future leaders of the subject: Mehul Malik (Heriot-Watt University) and Sarah Malik (University College London) along with industry insiders Nicole Gillett (Riverlane) and Muhammad Hamza Waseem (Quantinuum).

    As well as outlining the technical challenges in their fields, the speakers all stressed the importance of developing a “skills pipeline” so that the quantum sector has enough talented people to meet its needs. Also vital will be the need to communicate the mysteries and potential of quantum technology – not just to the public but to industrialists, government officials and venture capitalists. By many measures, the UK is at the forefront of quantum tech – and it is a lead it should not let slip.

    Clear talker Jim Al-Khalili giving his Friday night discourse at the Royal Institution on 7 November 2025. (Courtesy: Matin Durrani)

    The week ended with Al-Khalili giving a public lecture, also at the Royal Institution, entitled “A new quantum world: ‘spooky’ physics to tech revolution”. It formed part of the RI’s famous Friday night “discourses”, which this year celebrate their 200th anniversary. Al-Khalili, who also presents A Life Scientific on BBC Radio 4, is now the only person ever to have given three RI discourses.

    After the lecture, which was sold out, he took part in a panel discussion with Knight and Elizabeth Cunningham, a former vice-president for membership at the IOP. Al-Khalili was later presented with a special bottle of “Glentanglement” whisky made by Glasgow-based Fraunhofer UK for the Scottish Quantum Technology cluster.

    The post The future of quantum physics and technology debated at the Royal Institution appeared first on Physics World.

    https://physicsworld.com/a/the-future-of-quantum-physics-and-technology-debated-at-the-royal-institution/
    Matin Durrani

    Neural networks discover unstable singularities in fluid systems

    Result boosts our understanding of the Navier-Stokes equation   

    The post Neural networks discover unstable singularities in fluid systems appeared first on Physics World.

    Significant progress towards answering one of the Clay Mathematics Institute’s seven Millennium Prize Problems has been achieved using deep learning. The challenge is to establish whether or not the Navier-Stokes equation of fluid dynamics develops singularities. The work was done by researchers in the US and UK – including some at Google Deepmind. Some team members had already shown that simplified versions of the equation could develop stable singularities, which reliably form. In the new work, the researchers found unstable singularities, which form only under very specific conditions.

    The Navier–Stokes partial differential equation was developed in the 19th century by Claude-Louis Navier and George Stokes. It has proved its worth for modelling incompressible fluids in scenarios including water flow in pipes; airflow around aeroplanes; blood moving in veins; and magnetohydrodynamics in plasmas.
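    For reference, the incompressible Navier–Stokes equations at the centre of the problem take the standard textbook form below (included here for context; the notation is the usual one for velocity field u, pressure p, density ρ and kinematic viscosity ν):

```latex
% Incompressible Navier--Stokes equations; setting the viscosity nu = 0
% recovers the incompressible Euler equation discussed below.
\[
\begin{aligned}
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  &= -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u}, \\
\nabla \cdot \mathbf{u} &= 0 .
\end{aligned}
\]
```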

    No-one has yet proved, however, whether smooth, non-singular solutions to the equation always exist in three dimensions. “In the real world, there is no singularity…there is no energy going to infinity,” says fluid dynamics expert Pedram Hassanzadeh of the University of Chicago. “So if you have an equation that has a singularity, it tells you that there is some physics that is missing.” In 2000, the Clay Mathematics Institute in Denver, Colorado listed this proof as one of seven key unsolved problems in mathematics, offering a reward of $1,000,000 for an answer.

    Computational approaches

    Researchers have traditionally tackled the problem analytically, but in recent decades high-level computational simulations have been used to assist in the search. In a 2023 paper, mathematician Tristan Buckmaster of New York University and colleagues used a special type of machine learning algorithm called a physics-informed neural network to address the question.

    “The main difference is…you represent [the solution] in a highly non-linear way in terms of a neural network,” explains Buckmaster. This allows it to occupy a lower-dimensional space with fewer free parameters, and therefore to be optimized more efficiently. Using this approach, the researchers successfully obtained the first stable singularity in the Euler equation, an analogue of the Navier-Stokes equation that does not include viscosity.

    A stable singularity will still occur if the initial conditions of the fluid are changed slightly – although the time taken for them to form may be altered. An unstable singularity, however, may never occur if the initial conditions are perturbed even infinitesimally. Some researchers have hypothesized that any singularities in the Navier-Stokes equation must be unstable, but finding unstable singularities in a computer model is extraordinarily difficult.

    “Before our result there hadn’t been an unstable singularity for an incompressible fluid equation found numerically,” says geophysicist Ching-Yao Lai of California’s Stanford University.

    Physics-informed neural network

    In the new work the authors of the original paper and others teamed up with researchers at Google Deepmind to search for unstable singularities in a bounded 3D version of the Euler equation using a physics-informed neural network. “Unlike conventional neural networks that learn from vast datasets, we trained our models to match equations that model the laws of physics,” writes Yongji Wang of New York University and Stanford on Deepmind’s blog. “The network’s output is constantly checked against what the physical equations expect, and it learns by minimizing its ‘residual’, the amount by which its solution fails to satisfy the equations.”
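    To make the idea of a “residual” concrete, the toy sketch below checks how badly a trial solution violates a much simpler equation, the 1D heat equation, using finite differences. A real physics-informed neural network would instead differentiate a trained network exactly and minimize this quantity during training; this is only a schematic illustration under those stated simplifications, not the team’s code.

```python
import numpy as np

# Toy "physics residual": how badly does a trial solution u(x, t) violate the
# 1D heat equation u_t = u_xx? A physics-informed neural network replaces the
# hand-written trial function with a neural net and minimizes this residual.

def trial_solution(x, t):
    # Exact solution of u_t = u_xx for a sin(pi x) initial condition, so its
    # residual should be near zero (up to finite-difference error).
    return np.exp(-np.pi**2 * t) * np.sin(np.pi * x)

def residual(u, x, t, dx=1e-4, dt=1e-4):
    u_t = (u(x, t + dt) - u(x, t - dt)) / (2 * dt)
    u_xx = (u(x + dx, t) - 2 * u(x, t) + u(x - dx, t)) / dx**2
    return u_t - u_xx

x = np.linspace(0.05, 0.95, 50)
t = np.full_like(x, 0.1)

res = residual(trial_solution, x, t)
print(f"mean |residual| for the exact solution: {np.abs(res).mean():.2e}")

# A wrong guess, e.g. u = sin(pi x) with no decay in time, fails badly:
bad = residual(lambda x, t: np.sin(np.pi * x), x, t)
print(f"mean |residual| for a bad guess:        {np.abs(bad).mean():.2e}")
```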

    After an exhaustive search at a precision that is orders of magnitude higher than a normal deep learning protocol, the researchers discovered new families of singularities in the 3D Euler equation. They also found singularities in the related incompressible porous media equation used to model fluid flows in soil or rock; and in the Boussinesq equation that models atmospheric flows.

    The researchers also gleaned insights into the strength of the singularities. This could be important as stronger singularities might be less readily smoothed out by viscosity when moving from the Euler equation to the Navier-Stokes equation. The researchers are now seeking to model more open systems to study the problem in a more realistic space.

    Hassanzadeh, who was not involved in the work, believes that it is significant – although the results are not unexpected. “If the Euler equation tells you that ‘Hey, there is a singularity,’ it just tells you that there is physics that is missing and that physics becomes very important around that singularity,” he explains. “In the case of Euler we know that you get the singularity because, at the very smallest scales, the effects of viscosity become important…Finding a singularity in the Euler equation is a big achievement, but it doesn’t answer the big question of whether Navier-Stokes is a representation of the real world, because for us Navier-Stokes represents everything.”

    He says the extension to studying the full Navier-Stokes equation will be challenging but that “they are working with the best AI people in the world at Deepmind,” and concludes “I’m sure it’s something they’re thinking about”.

    The work is available on the arXiv pre-print server.

    The post Neural networks discover unstable singularities in fluid systems appeared first on Physics World.

    https://physicsworld.com/a/neural-networks-discover-unstable-singularities-in-fluid-systems/
    No Author

    NASA’s Goddard Space Flight Center hit by significant downsizing

    The centre has closed a third of its buildings over the past few months

    The post NASA’s Goddard Space Flight Center hit by significant downsizing appeared first on Physics World.

    NASA’s Goddard Space Flight Center (GSFC) looks set to lose a big proportion of its budget as a two-decade reorganization plan for the centre is being accelerated. The move, which is set to be complete by March, has left the Goddard campus with empty buildings and disillusioned employees. Some staff even fear that the actions during the 43-day US government shutdown, which ended on 12 November, could see the end of much of the centre’s activities.

    Based in Greenbelt, Maryland, the GSFC has almost 10 000 scientists and engineers, about 7000 of whom are employed by NASA contractors. Responsible for many of NASA’s most important uncrewed missions, telescopes, and probes, the centre is currently working on the Nancy Grace Roman Space Telescope, which is scheduled to launch in 2027, as well as the Dragonfly mission that is due to head for Saturn’s largest moon Titan in 2028.

    The ability to meet those schedules has now been put in doubt by the Trump administration’s proposed budget for financial year 2026, which began in October. It calls for NASA to receive almost $19bn – far less than the $25bn it has received for the past two years. If passed, Goddard would lose more than 42% of its staff.

    Congress, which passes the final budget, is not planning to cut NASA so deeply as it prepares its 2026 budget proposal. But on 24 September, Goddard managers began what they told employees was “a series of moves…that will reduce our footprint into fewer buildings”. The shift is intended to “bring down overall operating costs while maintaining the critical facilities we need for our core capabilities of the future”.

    While this is part of a 20-year “master plan” for the GSFC that NASA’s leadership approved in 2019, the management’s memo stated that “all planned moves will take place over the next several months and be completed by March 2026”. A report in September by Democratic members of the Senate Committee on Commerce, Science, and Transportation, which is responsible for NASA, asserts that the cuts are “in clear violation of the [US] constitution [without] regard for the impacts on NASA’s science missions and workforce”.

    On 3 November, the Goddard Engineers, Scientists and Technicians Association, a union representing NASA workers, reported that the GSFC had already closed over a third of its buildings, including some 100 labs. This had been done, it says, “with extreme haste and with no transparent strategy or benefit to NASA or the nation”. The union adds that the “closures are being justified as cost-saving but no details are being provided and any short-term savings are unlikely to offset a full account of moving costs and the reduced ability to complete NASA missions”.

    Accounting for the damage

    Zoe Lofgren, the lead Democrat on the House of Representatives Science Committee, has demanded of Sean Duffy, NASA’s acting administrator, that the agency “must now halt” any laboratory, facility and building closure and relocation activities at Goddard. In a letter to Duffy dated 10 November, she also calls for the “relocation, disposal, excessing, or repurposing of any specialized equipment or mission-related activities, hardware and systems” to end immediately.

    Lofgren now wants NASA to carry out a “full accounting of the damage inflicted on Goddard thus far” by 18 November. Owing to the government shutdown, no GSFC or NASA official was available to respond to Physics World’s requests for comment.

    Meanwhile, the Trump administration has renominated billionaire entrepreneur Jared Isaacman as NASA’s administrator. Trump had originally nominated Isaacman, who had flown on a private SpaceX mission and carried out a spacewalk, on the recommendation of SpaceX founder Elon Musk. But the administration withdrew the nomination in May following concerns among some Republicans that Isaacman had funded the Democratic Party.

    The post NASA’s Goddard Space Flight Center hit by significant downsizing appeared first on Physics World.

    https://physicsworld.com/a/nasas-goddard-space-flight-center-hit-by-significant-downsizing/
    No Author

    Designing better semiconductor chips: NP hard problems and forever chemicals

    We report from the Heidelberg Laureate Forum

    The post Designing better semiconductor chips: NP hard problems and forever chemicals appeared first on Physics World.

    Like any major endeavour, designing and fabricating semiconductor chips requires compromise. As well as trade-offs between cost and performance, designers also consider carbon emissions and other environmental impacts.

    In this episode of the Physics World Weekly podcast, Margaret Harris reports from the Heidelberg Laureate Forum where she spoke to two researchers who are focused on some of these design challenges.

    Up first is Mariam Elgamal, who’s doing a PhD at Harvard University on the development of environmentally sustainable computing systems. She explains why sustainability goes well beyond energy efficiency and must consider the manufacturing process and the chemicals used therein.

    Harris also chats with Andrew Gunter, who is doing a PhD at the University of British Columbia on circuit design for computer chips. He talks about the maths-related problems that must be solved in order to translate a desired functionality into a chip that can be fabricated.

     

    The post Designing better semiconductor chips: NP hard problems and forever chemicals appeared first on Physics World.

    https://physicsworld.com/a/designing-better-semiconductor-chips-np-hard-problems-and-forever-chemicals/
    Margaret Harris

    High-resolution PET scanner visualizes mouse brain structures with unprecedented detail

    A PET scanner with optimized depth-of-interaction detectors achieves a record sub-0.5 mm spatial resolution

    The post High-resolution PET scanner visualizes mouse brain structures with unprecedented detail appeared first on Physics World.

    Positron emission tomography (PET) is used extensively within preclinical research, enabling molecular imaging of rodent brains, for example, to investigate neurodegenerative disease. Such imaging studies require the highest possible spatial resolution to resolve the tiny structures in the animal’s brain. A research team at the National Institutes for Quantum Science and Technology (QST) in Japan has now developed the first PET scanner to achieve sub-0.5 mm spatial resolution.

    Submillimetre-resolution PET has been demonstrated by several research groups. Indeed, the QST team previously built a PET scanner with 0.55 mm resolution – sufficient to visualize the thalamus and hypothalamus in the mouse brain. But identification of smaller structures such as the amygdala and cerebellar nuclei has remained a challenge.

    “Sub-0.5 mm resolution is important to visualize mouse brain structures with high quantification accuracy,” explains first author Han Gyu Kang. “Moreover, this research work will change our perspective about the fundamental limit of PET resolution, which had been regarded as being around 0.5 mm due to the positron range of [the radioisotope] fluorine-18”.

    System optimization

    With Monte Carlo simulations revealing that sub-0.5 mm resolution could be achievable with optimal detector parameters and system geometry, Kang and colleagues performed a series of modifications to their submillimetre-resolution PET (SR-PET) to create the new high-resolution PET (HR-PET) scanner.

    The HR-PET, described in IEEE Transactions on Medical Imaging, is based around two 48 mm-diameter detector rings with an axial coverage of 23.4 mm. Each ring contains 16 depth-of-interaction (DOI) detectors (essential for minimizing parallax error in such a small-diameter ring) made from three layers of LYSO crystal arrays stacked in a staggered configuration, with the outer layer coupled to a silicon photomultiplier (SiPM) array.

    Compared with their previous design, the researchers reduced the detector ring diameter from 52.5 to 48 mm, which served to improve geometrical efficiency and minimize the noncollinearity effect. They also reduced the crystal pitch from 1.0 to 0.8 mm and the SiPM pitch from 3.2 to 2.4 mm, improving the spatial resolution and crystal decoding accuracy, respectively.

    Other changes included optimizing the crystal thicknesses to 3, 3 and 5 mm for the first, second and third arrays, as well as use of a narrow energy window (440–560 keV) to reduce the scatter fraction and inter-crystal scattering events. “The optimized staggered three-layer crystal array design is also a key factor to enhance the spatial resolution by improving the spatial sampling accuracy and DOI resolution compared with the previous SR-PET,” Kang points out.

    Performance tests showed that the HR-PET scanner had a system-level energy resolution of 18.6% and a coincidence timing resolution of 8.5 ns. Imaging a NEMA 22Na point source revealed a peak sensitivity at the axial centre of 0.65% for the 440–560 keV energy window and a radial resolution of 0.67±0.06 mm from the centre to 10 mm radial offset (using 2D filtered-back-projection reconstruction) – a 33% improvement over that achieved by the SR-PET.

    To further evaluate the performance of the HR-PET, the researchers imaged a rod-based resolution phantom. Images reconstructed using a 3D ordered-subset-expectation-maximization (OSEM) algorithm clearly resolved all of the rods. This included the smallest rods with diameters of 0.5 and 0.45 mm, with average valley-to-peak ratios of 0.533 and 0.655, respectively – a 40% improvement over the SR-PET.

    In vivo brain PET

    The researchers then used the HR-PET for in vivo mouse brain imaging. They injected 18F-FITM, a tracer used to image the central nervous system, into an awake mouse and performed a 30 min PET scan (with the animal anesthetized) 42 min after injection. For comparison, they scanned the same mouse for 30 min with a preclinical Inveon PET scanner.

    Mouse brain PET image
    Imaging the mouse brain: 3D maximum intensity projection image obtained from a 30-min HR-PET scan using 18F-FITM. High tracer uptake is seen in the cerebellum, thalamus and hypothalamus. Scale bar: 10 mm. (Courtesy: Han Gyu Kang)

    After OSEM reconstruction, strong tracer uptake in the thalamus, hypothalamus, cerebellar cortex and cerebellar nuclei was clearly visible in the coronal HR-PET images. A zoomed image distinguished the cerebellar nuclei and flocculus, while sagittal and axial images visualized the cortex and striatum. Images from the Inveon, however, could barely resolve these brain structures.

    The team also imaged the animal’s glucose metabolism using the tracer 18F-FDG. A 30 min HR-PET scan clearly delineated glucose transporter expression in the cortex, thalamus, hypothalamus and cerebellar nuclei. Here again, the Inveon could hardly identify these small structures.

    The researchers note that the 18F-FITM and 18F-FDG PET images matched well with the anatomy seen in a preclinical CT scan. “To the best of our knowledge, this is the first separate identification of the hypothalamus, amygdala and cerebellar nuclei of mouse brain,” they write.

    Future plans for the HR-PET scanner, says Kang, include using it for research on neurodegenerative disorders, with tracers that bind to amyloid beta or tau protein. “In addition, we plan to extend the axial coverage over 50 mm to explore the whole body of mice with sub-0.5 mm resolution, especially for oncological research,” he says. “Finally, we would like to achieve sub-0.3 mm PET resolution with more optimized PET detector and system designs.”

    The post High-resolution PET scanner visualizes mouse brain structures with unprecedented detail appeared first on Physics World.

    https://physicsworld.com/a/high-resolution-pet-scanner-visualizes-mouse-brain-structures-with-unprecedented-detail/
    Tami Freeman

    New experiments on static electricity cast doubt on previous studies in the field

    Bulk conductivity may have been hiding the dynamics of surface charge transfer, say researchers

    The post New experiments on static electricity cast doubt on previous studies in the field appeared first on Physics World.

    Static electricity is an everyday phenomenon, but it remains poorly understood. Researchers at the Institute of Science and Technology Austria (ISTA) have now shed new light on it by capturing an “image” of charge distributions as charge transfers from one surface to another. Their conclusions challenge longstanding interpretations of previous experiments and enhance our understanding of how charge behaves on insulating surfaces.

    Static electricity is also known as contact electrification because it occurs when charge is transferred from one object to another by touch. The most common laboratory example involves rubbing a balloon on someone’s head to make their hair stand on end. However, static electricity is also associated with many other activities, including coffee grinding, pollen transport and perhaps even the formation of rocky planets.

    One of the most useful ways of studying contact electrification is to move a metal tip slowly over the surface of a sample without touching it, recording a voltage all the while. These so-called scanning Kelvin methods produce an “image” of voltages created by the transferred charge. At the macroscale, around 100 μm to 10 cm, the main method is termed scanning Kelvin probe microscopy (SKPM). At the nanoscale, around 10 nm to 100 μm, a related but distinct variant known as Kelvin probe force microscopy (KPFM) is used instead.

    In previous fundamental physics studies using these techniques, the main challenges have been to make sense of the stationary patterns of charge left behind after contact electrification, and to investigate how these patterns evolve over space and time. In the latest work, the ISTA team chose to ask a slightly different question: when are the dynamics of charge transfer too fast for measured stationary patterns to yield meaningful information?

    Mapping the charge on the contact-electrified surface of a polymer film

    To find out, ISTA PhD student Felix Pertl built a special setup that could measure a sample’s surface charge with KPFM; transfer it below a linear actuator so that it could exchange charge when it contacted another material; and then transfer it underneath the KPFM again to image the resulting change in the surface charge.

    “In a typical set-up, the sample transfer, moving the AFM to the right place and reinitiation and recalibration of the KPFM parameters can easily take as long as tens of minutes,” Pertl explains. “In our system, this happens in as little as around 30 s. As all aspects of the system are completely automated, we can repeat this process, and quickly, many times.”

    An experimental set-up to measure static electricity
    Whole setup: side view of the experiment. The counter-sample (white rod with green sample holder and PDMS at the very end) approaches the sample and induces electric charge via contact. The AFM head is on the left, waiting until the sample returns to its original position. (Courtesy: Felix Pertl)

    This speed-up is important because static electricity dissipates relatively rapidly. In fact, the researchers found that the transferred charge disappeared from the sample’s surface in less time than most KPFM scans require. Their data also revealed that the deposited charge was, in effect, uniformly distributed across the surface and that its dissipation depended on the material’s electrical conductivity. Additional mathematical modelling and subsequent experiments confirmed that the more insulating a material is, the slower it dissipates charge.
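
    The simplest picture of this behaviour (and only a rough one – as noted further down, the real dynamics is more complicated than a simple resistor–capacitor decay) is exponential charge relaxation with a time constant set by the material’s permittivity and conductivity. The sketch below illustrates that baseline picture only; it is not the team’s model, and the material parameters are invented for the example.

        # Illustrative only: simple exponential charge-relaxation picture,
        # sigma(t) = sigma0 * exp(-t / tau), with tau = eps0 * eps_r / conductivity.
        # Material parameters below are invented for the example.
        import numpy as np

        EPS0 = 8.854e-12          # vacuum permittivity, F/m

        def relaxation_time(eps_r, conductivity):
            """Maxwell relaxation time of a homogeneous, weakly conducting solid."""
            return EPS0 * eps_r / conductivity

        def surface_charge(t, sigma0, tau):
            """Fraction of surface charge remaining after time t (exponential decay)."""
            return sigma0 * np.exp(-t / tau)

        # Two hypothetical polymers: a 'better' and a 'worse' insulator.
        for name, cond in [("more insulating", 1e-16), ("less insulating", 1e-14)]:
            tau = relaxation_time(eps_r=2.5, conductivity=cond)   # seconds
            print(f"{name}: tau ~ {tau:.0f} s, "
                  f"charge left after 60 s: {surface_charge(60, 1.0, tau):.2f}")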

    Surface heterogeneity likely not a feature of static electricity

    Pertl says that these results call into question the validity of some previous static electricity studies that used KPFM to study charge transfer. “The most influential paper in our field to date reported surface charge heterogeneity using KPFM,” he tells Physics World. At first, the ISTA team’s goal was to understand the origin of this heterogeneity. But when their own experiments showed an essentially homogenous distribution of surface charge, the researchers had to change tack.

    “The biggest challenge in our work was realizing – and then accepting – that we could not reproduce the results from this previous study,” Pertl says. “Convincing both my principal investigator and myself that our data revealed a very different physical mechanism required patience, persistence and trust in our experimental approach.”

    The discrepancy, he adds, implies that the surface heterogeneity previously observed was likely not a feature of static electricity, as was claimed. Instead, he says, it was probably “an artefact of the inability to image the charge before it had left the sample surface”.

    A historical precedent

    Studies of contact electrification go back a long way. Philippe Molinié of France’s GeePs Laboratory, who was not involved in this work, notes that the first experiments were performed by the English scientist William Gilbert back in the sixteenth century. As well as coining the term “electricity” (from the Greek “elektron”, meaning amber), Gilbert was also the first to establish that magnets maintain their attraction over time, while the forces produced by contact-charged insulators slowly decrease.

    “Four centuries later, many mysteries remain unsolved in the contact electrification phenomenon,” Molinié observes. He adds that the surfaces of insulating materials are highly complex and usually strongly disordered, which affects their ability to transfer charge at the molecular scale. “The dynamics of the charge neutralization, as Pertl and colleagues underline, is also part of the process and is much more complex than could be described by a simple resistance-capacitor model,” Molinié says.

    Although the ISTA team studied these phenomena with sophisticated Kelvin probe microscopy rather than the rudimentary tools available to Gilbert, it is, Molinié says, “striking that the competition between charge transfer and charge screening that comes from the conductivity of an insulator, first observed by Gilbert, is still at the very heart of the scientific interrogations that this interesting new work addresses.”

    “A more critical interpretation”

    The Austrian researchers, who detail their work in Phys. Rev. Lett., say they hope their experiments will “encourage a more critical interpretation” of KPFM data in the future, with a new focus on the role of sample grounding and bulk conductivity in shaping observed charge patterns. “We hope it inspires KPFM users to reconsider how they design and analyse experiments, which could lead to more accurate insights into charge behaviour in insulators,” Pertl says.

    “We are now planning to deliberately engineer surface charge heterogeneity into our samples,” he reveals. “By tuning specific surface properties, we aim to control the sign and spatial distribution of charge on defined regions of these.”

    The post New experiments on static electricity cast doubt on previous studies in the field appeared first on Physics World.

    https://physicsworld.com/a/new-experiments-on-static-electricity-cast-doubt-on-previous-studies-in-the-field/
    Isabelle Dumé

    SEMICON Europa 2025 presents cutting-edge technology for semiconductor R&D and production

    Europe’s largest event for electronics manufacturing comes to Munich on 18−21 November, 2025

    The post SEMICON Europa 2025 presents cutting-edge technology for semiconductor R&D and production appeared first on Physics World.

    “Global collaborations for European economic resilience” is the theme of SEMICON Europa 2025. The event is coming to Munich, Germany, on 18–21 November and will attract 25,000 semiconductor professionals, who will enjoy presentations from over 200 speakers.

    The TechARENA portion of the event will cover a wide range of technology-related issues including new materials, future computing paradigms and the development of hi-tech skills in the European workforce. There will also be an Executive Forum, which will feature leaders in industry and government and will cover topics including silicon geopolitics and the use of artificial intelligence in semiconductor manufacturing.

    SEMICON Europa will be held at the Messe München, where it will feature a huge exhibition with over 500 exhibitors from around the world. The exhibition is spread over three halls; here are some of the companies and product innovations to look out for on the show floor.

    Accelerating the future of electro-photonic integration with SmarAct

    As the boundaries between electronic and photonic technologies continue to blur, the semiconductor industry faces a growing challenge: how to test and align increasingly complex electro-photonic chip architectures efficiently, precisely, and at scale. At SEMICON Europa 2025, SmarAct will address this challenge head-on with its latest innovation – Fast Scan Align. This is a high-speed and high-precision alignment solution that redefines the limits of testing and packaging for integrated photonics.

    Fast Scan Align
    Fast Scan Align: SmarAct’s high-speed and high-precision alignment solution redefines the limits of testing and packaging for integrated photonics. (Courtesy: SmarAct)

    In the emerging era of heterogeneous integration, electronic and photonic components must be aligned and interconnected with sub-micrometre accuracy. Traditional positioning systems often struggle to deliver both speed and precision, especially when dealing with the delicate coupling between optical and electrical domains. SmarAct’s Fast Scan Align solution bridges this gap by combining modular motion platforms, real-time feedback control, and advanced metrology into one integrated system.

    At its core, Fast Scan Align leverages SmarAct’s electromagnetic and piezo-driven positioning stages, which are capable of nanometre-resolution motion in multiple degrees of freedom. Fast Scan Align’s modular architecture allows users to configure systems tailored to their application – from wafer-level testing to fibre-to-chip alignment with active optical coupling. Integrated sensors and intelligent algorithms enable scanning and alignment routines that drastically reduce setup time while improving repeatability and process stability.

    Fast Scan Align’s compact modules also allow a variety of measurement techniques to be integrated into the same platform – a capability that is becoming decisive as the level of integration of complex electro-photonic chips increases.

    Beyond wafer-level testing and packaging, wafer positioning with extreme precision is more crucial than ever for the highly integrated chips of the future. SmarAct’s PICOSCALE interferometer addresses this challenge by delivering picometre-level displacement measurements directly at the point of interest.

    When combined with SmarAct’s precision wafer stages, the PICOSCALE interferometer ensures highly accurate motion tracking and closed-loop control during dynamic alignment processes. This synergy between motion and metrology gives users unprecedented insight into the mechanical and optical behaviour of their devices – which is a critical advantage for high-yield testing of photonic and optoelectronic wafers.

    Visitors to SEMICON Europa will also experience how all of SmarAct’s products – from motion and metrology components to modular systems and up to turn-key solutions – integrate seamlessly, offering intuitive operation, full automation capability, and compatibility with laboratory and production environments alike.

    For more information visit SmarAct at booth B1.860 or explore more of SmarAct’s solutions in the semiconductor and photonics industry.

    Optimized pressure monitoring: Efficient workflows with Thyracont’s VD800 digital compact vacuum meters

    Thyracont Vacuum Instruments will be showcasing its precision vacuum metrology systems in exhibition hall C1. Made in Germany, the company’s broad portfolio combines diverse measurement technologies – including piezo, Pirani, capacitive, cold cathode, and hot cathode – to deliver reliable results across a pressure range from 2000 to 3 × 10⁻¹¹ mbar.

    VD800 series
    VD800: Thyracont’s series combines high accuracy with a highly intuitive user interface, defining the next generation of compact vacuum meters. (Courtesy: Thyracont)

    Front-and-centre at SEMICON Europa will be Thyracont’s new series of VD800 compact vacuum meters. These instruments provide precise, on-site pressure monitoring in industrial and research environments. Featuring a direct pressure display and real-time pressure graphs, the VD800 series is ideal for service and maintenance tasks, laboratory applications, and test setups.

    The VD800 series combines high accuracy with a highly intuitive user interface. This delivers real-time measurement values; pressure diagrams; and minimum and maximum pressure – all at a glance. The VD800’s 4+1 membrane keypad ensures quick access to all functions. USB-C and optional Bluetooth LE connectivity deliver seamless data readout and export. The VD800’s large internal data logger can store over 10 million measured values with their RTC data, with each measurement series saved as a separate file.

    Data sampling rates can be set from 20 ms to 60 s to achieve dynamic pressure tracking or long-term measurements. Leak rates can be measured directly by monitoring the rise in pressure in the vacuum system. Intelligent energy management gives the meters extended battery life and longer operation times. Battery charging is done conveniently via USB-C.

    The vacuum meters are available in several different sensor configurations, making them adaptable to a wide range of different uses. Model VD810 integrates a piezo ceramic sensor for making gas-type-independent measurements for rough vacuum applications. This sensor is insensitive to contamination, making it suitable for rough industrial environments. The VD810 measures absolute pressure from 2000 to 1 mbar and relative pressure from −1060 to +1200 mbar.

    Model VD850 integrates a piezo/Pirani combination sensor, which delivers high resolution and accuracy in the rough and fine vacuum ranges. Optimized temperature compensation ensures stable measurements in the absolute pressure range from 1200 to 5 × 10⁻⁵ mbar and in the relative pressure range from −1060 to +340 mbar.

    The model VD800 is a standalone meter designed for use with Thyracont’s USB-C vacuum transducers, which are available in two models. The VSRUSB USB-C transducer is a piezo/Pirani combination sensor that measures absolute pressure in the 2000 to 5.0 × 10⁻⁵ mbar range. The other is the VSCUSB USB-C transducer, which measures absolute pressures from 2000 down to 1 mbar and has a relative pressure range from −1060 to +1200 mbar. A USB-C cable connects the transducer to the VD800 for quick and easy data retrieval. The USB-C transducers are ideal for hard-to-reach areas of vacuum systems. The transducers can be activated while a process is running, enabling continuous monitoring and improved service diagnostics.

    With its blend of precision, flexibility, and ease of use, the Thyracont VD800 series defines the next generation of compact vacuum meters. The devices’ intuitive interface, extensive data capabilities, and modern connectivity make them an indispensable tool for laboratories, service engineers, and industrial operators alike.

    To experience the future of vacuum metrology in Munich, visit Thyracont at SEMICON Europa hall C1, booth 752. There you will discover how the VD800 series can optimize your pressure monitoring workflows.

    The post SEMICON Europa 2025 presents cutting-edge technology for semiconductor R&D and production appeared first on Physics World.

    https://physicsworld.com/a/semicon-europa-2025-presents-cutting-edge-technology-for-semiconductor-rd-and-production/
    Hamish Johnston

    Physicists discuss the future of machine learning and artificial intelligence

    The editors-in-chief of IOP Publishing’s machine learning series share their views

    The post Physicists discuss the future of machine learning and artificial intelligence appeared first on Physics World.

    Pierre Gentine, Jimeng Sun, Jay Lee and Kyle Cranmer
    Looking ahead to the future of machine learning: (clockwise from top left) Jay Lee, Jimeng Sun, Pierre Gentine and Kyle Cranmer.

    IOP Publishing’s Machine Learning series is the world’s first open-access journal series dedicated to the application and development of machine learning (ML) and artificial intelligence (AI) for the sciences.

    Part of the series is Machine Learning: Science and Technology, launched in 2019, which bridges the applications of and advances in machine learning across the sciences. Machine Learning: Earth is dedicated to the application of ML and AI across all areas of Earth, environmental and climate sciences, while Machine Learning: Health covers healthcare, medical, biological, clinical and health sciences, and Machine Learning: Engineering focuses on applying AI and non-traditional machine learning to the most complex engineering challenges.

    Here, the editors-in-chief (EiC) of the four journals discuss the growing importance of machine learning and their plans for the future.

    Kyle Cranmer is a particle physicist and data scientist at the University of Wisconsin-Madison and is EiC of Machine Learning: Science and Technology (MLST). Pierre Gentine is a geophysicist at Columbia University and is EiC of Machine Learning: Earth. Jimeng Sun is a biophysicist at the University of Illinois at Urbana-Champaign and is EiC of Machine Learning: Health. Mechanical engineer Jay Lee is from the University of Maryland and is EiC of Machine Learning: Engineering.

    What do you attribute to the huge growth over the past decade in research into and using machine learning?

    Kyle Cranmer (KC): It is due to a convergence of multiple factors. The initial success of deep learning was driven largely by benchmark datasets, advances in computing with graphics processing units, and some clever algorithmic tricks. Since then, we’ve seen a huge investment in powerful, easy-to-use tools that have dramatically lowered the barrier to entry and driven extraordinary progress.

    Pierre Gentine (PG): Machine learning has been transforming many fields of physics, as it can accelerate physics simulations, better handle diverse sources of data (multimodality) and help us make better predictions.

    Jimeng Sun (JS): Over the past decade, we have seen machine learning models consistently reach — and in some cases surpass — human-level performance on real-world tasks. This is not just in benchmark datasets, but in areas that directly impact operational efficiency and accuracy, such as medical imaging interpretation, clinical documentation, and speech recognition. Once ML proved it could perform reliably at human levels, many domains recognized its potential to transform labour-intensive processes.

    Jay Lee (JL):  Traditionally, ML growth is based on the development of three elements: algorithms, big data, and computing.  The past decade’s growth in ML research is due to the perfect storm of abundant data, powerful computing, open tools, commercial incentives, and groundbreaking discoveries—all occurring in a highly interconnected global ecosystem.

    What areas of machine learning excite you the most and why?

    KC: The advances in generative AI and self-supervised learning are very exciting. By generative AI, I don’t mean Large Language Models — though those are exciting too — but probabilistic ML models that can be useful in a huge number of scientific applications. The advances in self-supervised learning also allows us to engage our imagination of the potential uses of ML beyond well-understood supervised learning tasks.

    PG: I am very interested in the use of ML for climate simulations and fluid dynamics simulations.

    JS: The emergence of agentic systems in healthcare — AI systems that can reason, plan, and interact with humans to accomplish complex goals. A compelling example is in clinical trial workflow optimization. An agentic AI could help coordinate protocol development, automatically identify eligible patients, monitor recruitment progress, and even suggest adaptive changes to trial design based on interim data. This isn’t about replacing human judgment — it’s about creating intelligent collaborators that amplify expertise, improve efficiency, and ultimately accelerate the path from research to patient benefit.

    JL: One area is generative and multimodal ML – integrating text, images, video and more – which is transforming human–AI interaction, robotics and autonomous systems. Equally exciting is applying ML to nontraditional domains like semiconductor fabs, smart grids, and electric vehicles, where complex engineering systems demand new kinds of intelligence.

    What vision do you have for your journal in the coming years?

    KC: The need for a venue to propagate advances in AI/ML in the sciences is clear. The large AI conferences are under stress, and their review system is designed to be a filter not a mechanism to ensure quality, improve clarity and disseminate progress. The large AI conferences also aren’t very welcoming to user-inspired research, often casting that work as purely applied. Similarly, innovation in AI/ML often takes a back seat in physics journals, which slows the propagation of those ideas to other fields. My vision for MLST is to fill this gap and nurture the community that embraces AI/ML research inspired by the physical sciences.

    PG: I hope we can demonstrate that machine learning is more than a nice tool but that it can play a fundamental role in physics and Earth sciences, especially when it comes to better simulating and understanding the world.

    JS: I see Machine Learning: Health becoming the premier venue for rigorous ML–health research — a place where technical novelty and genuine clinical impact go hand in hand. We want to publish work that not only advances algorithms but also demonstrates clear value in improving health outcomes and healthcare delivery. Equally important, we aim to champion open and reproducible science. That means encouraging authors to share code, data, and benchmarks whenever possible, and setting high standards for transparency in methods and reporting. By doing so, we can accelerate the pace of discovery, foster trust in AI systems, and ensure that our field’s breakthroughs are accessible to — and verifiable by — the global community.

    JL:  Machine Learning: Engineering envisions becoming the global platform where ML meets engineering. By fostering collaboration, ensuring rigour and interpretability, and focusing on real-world impact, we aim to redefine how AI addresses humanity’s most complex engineering challenges.

    The post Physicists discuss the future of machine learning and artificial intelligence appeared first on Physics World.

    https://physicsworld.com/a/physicists-discuss-the-future-of-machine-learning-and-artificial-intelligence/
    Michael Banks

    Playing games by the quantum rulebook expends less energy

    New form of quantum advantage emerges from study of game theory and quantum information

    The post Playing games by the quantum rulebook expends less energy appeared first on Physics World.

    Games played under the laws of quantum mechanics dissipate less energy than their classical equivalents. This is the finding of researchers at Singapore’s Nanyang Technological University (NTU), who worked with colleagues in the UK, Austria and the US to apply the mathematics of game theory to quantum information. The researchers also found that for more complex game strategies, the quantum-classical energy difference can increase without bound, raising the possibility of a “quantum advantage” in energy dissipation.

    Game theory is the field of mathematics that aims to formally understand the payoff or gains that a person or other entity (usually called an agent) will get from following a certain strategy. Concepts from game theory are often applied to studies of quantum information, especially when trying to understand whether agents who can use the laws of quantum physics can achieve a better payoff in the game.

    In the latest work, which is published in Physical Review Letters, Jayne Thompson, Mile Gu and colleagues approached the problem from a different direction. Rather than focusing on differences in payoffs, they asked how much energy must be dissipated to achieve identical payoffs for games played under the laws of classical versus quantum physics. In doing so, they were guided by Landauer’s principle, an important concept in thermodynamics and information theory that states that there is a minimum energy cost to erasing a piece of information.

    This Landauer minimum is known to hold for both classical and quantum systems. However, in practice systems will spend more than the minimum energy erasing memory to make space for new information, and this energy will be dissipated as heat. What the NTU team showed is that this extra heat dissipation can be reduced in the quantum system compared to the classical one.
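
    For scale, the Landauer minimum for erasing a single bit at temperature T is k_BT ln 2, which works out to only about 3 × 10⁻²¹ J at room temperature. The snippet below simply evaluates that textbook formula; it is not taken from the paper.

        # Back-of-the-envelope Landauer limit: minimum heat dissipated per erased bit,
        # E_min = k_B * T * ln(2). This is the textbook formula, not a result from the paper.
        import math

        K_B = 1.380649e-23   # Boltzmann constant, J/K

        def landauer_limit(temperature_kelvin):
            """Minimum energy cost (in joules) of erasing one bit at temperature T."""
            return K_B * temperature_kelvin * math.log(2)

        print(f"Room temperature (300 K):  {landauer_limit(300):.2e} J per bit")
        print(f"Dilution fridge  (0.02 K): {landauer_limit(0.02):.2e} J per bit")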

    Planning for future contingencies

    To understand why, consider that when a classical agent creates a strategy, it must plan for all possible future contingencies. This means it stores possibilities that never occur, wasting resources. Thompson explains this with a simple analogy. Suppose you are packing to go on a day out. Because you are not sure what the weather is going to be, you must pack items to cover all possible weather outcomes. If it’s sunny, you’d like sunglasses. If it rains, you’ll need your umbrella. But if you only end up using one of these items, you’ll have wasted space in your bag.

    “It turns out that the same principle applies to information,” explains Thompson. “Depending on future outcomes, some stored information may turn out to be unnecessary – yet an agent must still maintain it to stay ready for any contingency.”

    For a classical system, this can be a very wasteful process. Quantum systems, however, can use superposition to store past information more efficiently. When systems in a quantum superposition are measured, they probabilistically reveal an outcome associated with only one of the states in the superposition. Hence, while superposition can be used to store both pasts, upon measurement all excess information is automatically erased “almost as if they had never stored this information at all,” Thompson explains.

    The upshot is that because information erasure has close ties to energy dissipation, this gives quantum systems an energetic advantage. “This is a fantastic result focusing on the physical aspect that many other approaches neglect,” says Vlatko Vedral, a physicist at the University of Oxford, UK who was not involved in the research.

    Implications of the research

    Gu and Thompson say their result could have implications for the large language models (LLMs) behind popular AI tools such as ChatGPT, as it suggests there might be theoretical advantages, from an energy consumption point of view, in using quantum computers to run them.

    Another, more foundational question they hope to understand regarding LLMs is the inherent asymmetry in their behaviour. “It is likely a lot more difficult for an LLM to write a book from back cover to front cover, as opposed to in the more conventional temporal order,” Thompson notes. When considered from an information-theoretic point of view, the two tasks are equivalent, making this asymmetry somewhat surprising.

    In Thompson and Gu’s view, taking waste into consideration could shed light on this asymmetry. “It is likely we have to waste more information to go in one direction over the other,” Thompson says, “and we have some tools here which could be used to analyse this”.

    For Vedral, the result also has philosophical implications. If quantum agents are more optimal, he says, it “surely is telling us that the most coherent picture of the universe is the one where the agents are also quantum and not just the underlying processes that they observe”.

    • This article was amended on 19 November 2025 to correct a reference to the minimum energy cost of erasing information. It is the Landauer minimum, not the Landau minimum.

    The post Playing games by the quantum rulebook expends less energy appeared first on Physics World.

    https://physicsworld.com/a/playing-games-by-the-quantum-rulebook-expends-less-energy/
    No Author

    Teaching machines to understand complexity

    This research introduces a novel approach to uncovering structural variables in complex systems, reshaping how we model the unpredictable behaviour of the real world

    The post Teaching machines to understand complexity appeared first on Physics World.

    Complex systems model real-world behaviour that is dynamic and often unpredictable. They are challenging to simulate because of nonlinearity, where small changes in conditions can lead to disproportionately large effects; many interacting variables, which make computational modelling cumbersome; and randomness, where outcomes are probabilistic. Machine learning is a powerful tool for understanding complex systems. It can be used to find hidden relationships in high-dimensional data and predict the future state of a system based on previous data.

    This research develops a novel machine learning approach for complex systems that allows the user to extract a few collective descriptors of the system, referred to as inherent structural variables. The researchers used an autoencoder (a type of machine learning tool) to examine snapshots of how atoms are arranged in a system at any moment (called instantaneous atomic configurations). Each snapshot is then matched to a more stable version of that structure (an inherent structure), which represents the system’s underlying shape or pattern after thermal noise is removed. These inherent structural variables enable the analysis of structural transitions both in and out of equilibrium and the computation of high-resolution free-energy landscapes. These are detailed maps that show how a system’s energy changes as its structure or configuration changes, helping researchers understand stability, transitions, and dynamics in complex systems.
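
    To make the general idea concrete (this is a generic sketch, not the authors’ code), an autoencoder squeezes each featurized snapshot through a low-dimensional bottleneck and is trained to reconstruct its input; the bottleneck values then serve as the collective structural variables. The layer sizes, the number of latent variables and the random stand-in data below are all assumptions for illustration.

        # Minimal sketch of the general idea, not the authors' implementation:
        # compress each (featurized) atomic configuration into a few latent
        # "structural variables" with an autoencoder. Layer sizes, latent dimension
        # and the random training data are assumptions.
        import torch
        import torch.nn as nn

        N_FEATURES = 64    # length of one configuration's descriptor vector (assumed)
        N_LATENT = 2       # number of collective structural variables to extract (assumed)

        class AutoEncoder(nn.Module):
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Linear(N_FEATURES, 32), nn.ReLU(),
                    nn.Linear(32, N_LATENT),
                )
                self.decoder = nn.Sequential(
                    nn.Linear(N_LATENT, 32), nn.ReLU(),
                    nn.Linear(32, N_FEATURES),
                )

            def forward(self, x):
                z = self.encoder(x)          # low-dimensional structural variables
                return self.decoder(z), z

        model = AutoEncoder()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        # Stand-in for featurized snapshots of atomic configurations.
        snapshots = torch.randn(1000, N_FEATURES)

        for epoch in range(50):
            reconstruction, latent = model(snapshots)
            loss = loss_fn(reconstruction, snapshots)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        # After training, model.encoder(x) on new configurations gives the
        # collective variables used to map transitions and free-energy landscapes.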

    The model is versatile, and the authors demonstrate how it can be applied to metal nanoclusters and protein structures. In the case of Au147 nanoclusters (well-organised structures made up of 147 gold atoms), the inherent structural variables reveal three main types of stable structures that the gold nanocluster can adopt: fcc (face-centred cubic), Dh (decahedral), and Ih (icosahedral). These structures represent different stable states that a nanocluster can switch between, and on the high-resolution free-energy landscape they appear as valleys. Moving from one valley to another isn’t easy: there are narrow paths or barriers between them, known as kinetic bottlenecks.

    The researchers validated their machine learning model using Markov state models, which are mathematical tools that help analyse how a system moves between different states over time, and electron microscopy, which images atomic structures and can confirm that the predicted structures exist in the gold nanoclusters. The approach also captures non-equilibrium melting and freezing processes, offering insights into polymorph selection and metastable states. Scalability is demonstrated up to Au309 clusters.

    The generality of the method is further demonstrated by applying it to the bradykinin peptide, a completely different type of system, identifying distinct structural motifs and transitions. Applying the method to a biological molecule provides further evidence that the machine learning approach is a flexible, powerful technique for studying many kinds of complex systems. This work contributes to machine learning strategies, as well as experimental and theoretical studies of complex systems, with potential applications across liquids, glasses, colloids, and biomolecules.

    Read the full article

    Inherent structural descriptors via machine learning

    Emanuele Telari et al 2025 Rep. Prog. Phys. 88 068002

    Do you want to learn more about this topic?

    Complex systems in the spotlight: next steps after the 2021 Nobel Prize in Physics by Ginestra Bianconi et al (2023)

    The post Teaching machines to understand complexity appeared first on Physics World.

    https://physicsworld.com/a/teaching-machines-to-understand-complexity/
    Lorna Brigham

    Using AI to find new particles at the LHC

    The Standard Model of particle physics is a very well-tested theory that describes the fundamental particles and their interactions. However, it does have several key limitations. For example, it doesn’t account for dark matter or why neutrinos have masses. One of the main aims of experimental particle physics at the moment is therefore to search […]

    The post Using AI to find new particles at the LHC appeared first on Physics World.

    The Standard Model of particle physics is a very well-tested theory that describes the fundamental particles and their interactions. However, it does have several key limitations. For example, it doesn’t account for dark matter or why neutrinos have masses.

    One of the main aims of experimental particle physics at the moment is therefore to search for signs of new physical phenomena beyond the Standard Model.

    Finding something new like this would point us towards a better theoretical model of particle physics: one that can explain things that the Standard Model isn’t able to.

    These searches often involve looking for rare or unexpected signals in high-energy particle collisions such as those at CERN’s Large Hadron Collider (LHC).

    In a new paper published by the CMS collaboration, a new analysis method was used to search for new particles produced by proton–proton collisions at the LHC.

    These particles would decay into two jets, but with unusual internal structure not typical of known particles like quarks or gluons.

    The researchers used advanced machine learning techniques to identify jets with different substructures, applying various anomaly detection methods to maximise sensitivity to unknown signals.

    Unlike traditional strategies, anomaly detection methods allow the AI models to identify anomalous patterns in the data without being provided specific simulated examples, giving them increased sensitivity to a wider range of potential new particles.
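
    To give a flavour of what such model-agnostic anomaly detection looks like in practice (a generic illustration, not the CMS method), one can train a detector on background-dominated jet-substructure features and then cut on its anomaly score, without ever showing it a simulated signal. The feature set and the use of an isolation forest below are assumptions made for illustration.

        # Generic illustration of anomaly detection on jet-substructure features;
        # this is NOT the CMS analysis, and the feature set is invented for the example.
        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(0)

        # Stand-in features per jet, e.g. (mass, n-subjettiness ratio, groomed mass fraction).
        background = rng.normal(loc=[30.0, 0.7, 0.5], scale=[10.0, 0.1, 0.1], size=(50_000, 3))
        candidates = rng.normal(loc=[80.0, 0.3, 0.8], scale=[5.0, 0.05, 0.05], size=(50, 3))

        # Train only on (mostly) background-like data: no simulated signal is needed.
        detector = IsolationForest(n_estimators=200, random_state=0).fit(background)

        # Lower scores = more anomalous; cut on the score to select a signal-enriched sample.
        scores = detector.decision_function(np.vstack([background[:1000], candidates]))
        threshold = np.quantile(scores, 0.01)
        print(f"Jets flagged as anomalous: {(scores < threshold).sum()}")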

    This time, they didn’t find any significant deviations from expected background values. Although no new particles were found, the results enabled the team to put several new theoretical models to the test for the first time.  They were also able to set upper bounds on the production rates of several hypothetical particles.

    Most importantly, the study demonstrates that machine learning can significantly enhance the sensitivity of searches for new physics, offering a powerful tool for future discoveries at the LHC.

    Read the full article

    Model-agnostic search for dijet resonances with anomalous jet substructure in proton–proton collisions at √s = 13 TeV

    The CMS Collaboration, 2025 Rep. Prog. Phys. 88 067802

    The post Using AI to find new particles at the LHC appeared first on Physics World.

    https://physicsworld.com/a/using-ai-to-find-new-particles-at-the-lhc/
    Paul Mabey

    Researchers pin down the true cost of precision in quantum clocks

    Trade-off between precision and entropy production lies in measurement process

    The post Researchers pin down the true cost of precision in quantum clocks appeared first on Physics World.

    Classical clocks have to obey the second law of thermodynamics: the higher their precision, the more entropy they produce. For a while, it seemed like quantum clocks might beat this system, at least in theory. This is because although quantum fluctuations produce no entropy, if you can count those fluctuations as clock “ticks”, you can make a clock with nonzero precision. Now, however, a collaboration of researchers across Europe has pinned down where the entropy-precision trade-off balances out: it’s in the measurement process. As project leader Natalia Ares observes, “There’s no such thing as a free lunch.”

    The clock the team used to demonstrate this principle consists of a pair of quantum dots coupled by a thin tunnelling barrier. In this double quantum dot system, a “tick” occurs whenever an electron tunnels from one side of the system to the other, through both dots. Applying a bias voltage gives ticks a preferred direction.

    This might not seem like the most obvious kind of clock. Indeed, collaboration member Florian Meier describes it as “quite bad” as an actual timekeeping device. However, Ares points out that although the tunnelling process is random (stochastic), the period between ticks does have a mean and a standard deviation. Hence, given enough ticks, the number of ticks recorded will tell you something about how much time has passed.
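
    A toy simulation makes that statistical point explicit (this is an idealized model, not the quantum-dot device): if ticks arrive at random with a fixed mean interval, the fractional uncertainty in the time inferred from counting N ticks falls off as 1/√N. All numbers below are invented.

        # Toy stochastic clock (not the quantum-dot device): ticks arrive at random
        # with exponential waiting times of mean TAU, so the true time elapsed after
        # N ticks is Gamma(N, TAU)-distributed. Counting ticks estimates time as
        # N * TAU, and the fractional error shrinks like 1/sqrt(N).
        import numpy as np

        rng = np.random.default_rng(1)
        TAU = 1.0e-3          # mean time between ticks (arbitrary units, assumed)
        RUNS = 10_000         # repeated experiments per value of N

        for n_ticks in (100, 10_000, 1_000_000):
            # Sum of n_ticks exponential intervals == one draw from Gamma(n_ticks, TAU).
            elapsed = rng.gamma(shape=n_ticks, scale=TAU, size=RUNS)
            frac_spread = elapsed.std() / elapsed.mean()
            print(f"N = {n_ticks:>9,}: fractional spread ~ {frac_spread:.5f} "
                  f"(1/sqrt(N) = {1/np.sqrt(n_ticks):.5f})")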

    In any case, Meier adds, they were not setting out to build the most accurate clock. Instead, they wanted to build a playground to explore basic principles of energy dissipation and clock precision, and for that, it works really well. “The really cool thing I like about what they did was that with that particular setup, you can really pinpoint the entropy dissipation of the measurement somehow in this quantum dot,” says Meier, a PhD student at the Technical University in Vienna, Austria. “I think that’s really unique in the field.”

    Calculating the entropy

    To measure the entropy of each quantum tick, the researchers measured the voltage drop (and associated heat dissipation) for each electron tunnelling through the double quantum dot. Vivek Wadhia, a DPhil student in Ares’s lab at the University of Oxford, UK who performed many of the measurements, points out that this single unit of charge does not equate to very much entropy. However, measuring the entropy of the tunnelling electron was not the full story.

    A quantum clock with Vivek Wadhia
    Timekeeping: Vivek Wadhia working on the clock used in the experiment. (Courtesy: Wadhia et al./APS 2025)

    Because the ticks of the quantum clock were, in effect, continuously monitored by the environment, the coherence time for each quantum tunnelling event was very short. However, because the time on this clock could not be observed directly by humans – unlike, say, the hands of a mechanical clock – the researchers needed another way to measure and record each tick.

    For this, they turned to the electronics they were using in the lab and compared the power in versus the power out on a macroscopic scale. “That’s the cost of our measurement, right?” says Wadhia, adding that this cost includes both the measuring and recording of each tick. He stresses that they were not trying to find the most thermodynamically efficient measurement technique: “The idea was to understand how the readout compares to the clockwork.”

    This classical entropy associated with measuring and recording each tick turns out to be nine orders of magnitude larger than the quantum entropy of a tick – more than enough for the system to operate as a clock with some level of precision. “The interesting thing is that such simple systems sometimes reveal how you can maybe improve precision at a very low cost thermodynamically,” Meier says.

    As a next step, Ares plans to explore different arrangements of quantum dots, using Meier’s previous theoretical work to improve the clock’s precision. “We know that, for example, clocks in nature are not that energy intensive,” Ares tells Physics World. “So clearly, for biology, it is possible to run a lot of processes with stochastic clocks.”

    The research is reported in Physical Review Letters.

    The post Researchers pin down the true cost of precision in quantum clocks appeared first on Physics World.

    https://physicsworld.com/a/researchers-pin-down-the-true-cost-of-precision-in-quantum-clocks/
    Anna Demming

    The forgotten pioneers of computational physics

    Iulia Georgescu highlights the forgotten pioneers of computational physics and calls for a wider appreciation of research software engineers

    The post The forgotten pioneers of computational physics appeared first on Physics World.

    When you look back at the early days of computing, some familiar names pop up, including John von Neumann, Nicholas Metropolis and Richard Feynman. But they were not lonely pioneers – they were part of a much larger group, using mechanical and then electronic computers to do calculations that had never been possible before.

    These people, many of whom were women, were the first scientific programmers and computational scientists. Skilled in the complicated operation of early computing devices, they often had degrees in maths or science, and were an integral part of research efforts. And yet, their fundamental contributions are mostly forgotten.

    This was in part because of their gender – it was an age when sexism was rife, and it was standard for women to be fired from their job after getting married. However, there is another important factor that is often overlooked, even in today’s scientific community – people in technical roles are often underappreciated and underacknowledged, even though they are the ones who make research possible.

    Human and mechanical computers

    Originally, a “computer” was a human being who did calculations by hand or with the help of a mechanical calculator. It is thought that the world’s first computational lab was set up in 1937 at Columbia University. But it wasn’t until the Second World War that the demand for computation really exploded, with the need for artillery calculations, new technologies and code breaking.

    Three women in a basement lab performing calculations by hand
    Human computers: The term “computer” originally referred to people who performed calculations by hand. Here, Kay McNulty, Alyse Snyder and Sis Stump operate the differential analyser in the basement of the Moore School of Electrical Engineering, University of Pennsylvania, circa 1942–1945. (Courtesy: US government)

    In the US, the development of the atomic bomb during the Manhattan Project (established in 1943) required huge computational efforts, so it wasn’t long before the New Mexico site had a hand-computing group. Called the T-5 group of the Theoretical Division, it initially consisted of about 20 people. Most were women, including the spouses of other scientific staff. Among them were Mary Frankel, a mathematician married to physicist Stan Frankel; mathematician Augusta “Mici” Teller, who was married to Edward Teller, the “father of the hydrogen bomb”; and Jean Bacher, the wife of physicist Robert Bacher.

    As the war continued, the T-5 group expanded to include civilian recruits from the nearby towns and members of the Women’s Army Corps. Its staff worked around the clock, using printed mathematical tables and desk calculators in four-hour shifts – but that was not enough to keep up with the computational needs for bomb development. In the early spring of 1944, IBM punch-card machines were brought in to supplement the human power. They became so effective that the machines were soon being used for all large calculations, 24 hours a day, in three shifts.

    The computational group continued to grow, and among the new recruits were Naomi Livesay and Eleonor Ewing. Livesay held an advanced degree in mathematics and had done a course in operating and programming IBM electric calculating machines, making her an ideal candidate for the T-5 division. She in turn recruited Ewing, a fellow mathematician who was a former colleague. The two young women supervised the running of the IBM machines around the clock.

    The frantic pace of the T-5 group continued until the end of the war in September 1945. The development of the atomic bomb required an immense computational effort, which was made possible through hand and punch-card calculations.

    Electronic computers

    Shortly after the war ended, the first fully electronic, general-purpose computer – the Electronic Numerical Integrator and Computer (ENIAC) – became operational at the University of Pennsylvania, following two years of development. The project had been led by physicist John Mauchly and electrical engineer J Presper Eckert. The machine was operated and coded by six women – mathematicians Betty Jean Jennings (later Bartik); Kathleen, or Kay, McNulty (later Mauchly, then Antonelli); Frances Bilas (Spence); Marlyn Wescoff (Meltzer) and Ruth Lichterman (Teitelbaum); as well as Betty Snyder (Holberton) who had studied journalism.

    Two women adjusting switches on a large room-sized computer
    World first: The ENIAC was the first programmable, electronic, general-purpose digital computer. It was built at the US Army’s Ballistic Research Laboratory in 1945, then moved to the University of Pennsylvania in 1946. Its initial team of six coders and operators were all women, including Betty Jean Jennings (later Bartik – left of photo) and Frances Bilas (later Spence – right of photo). They are shown preparing the computer for Demonstration Day in February 1946. (Courtesy: US Army/ ARL Technical Library)

    Polymath John von Neumann also got involved when looking for more computing power for projects at the new Los Alamos Laboratory, established in New Mexico in 1947. In fact, although originally designed to solve ballistic trajectory problems, the first problem to be run on the ENIAC was “the Los Alamos problem” – a thermonuclear feasibility calculation for Teller’s group studying the H-bomb.

    Like in the Manhattan Project, several husband-and-wife teams worked on the ENIAC, the most famous being von Neumann and his wife Klara Dán, and mathematicians Adele and Herman Goldstine. Dán von Neumann in particular worked closely with Nicholas Metropolis, who alongside mathematician Stanislaw Ulam had coined the term Monte Carlo to describe numerical methods based on random sampling. Indeed, between 1948 and 1949 Dán von Neumann and Metropolis ran the first series of Monte Carlo simulations on an electronic computer.

    Work began on a new machine at Los Alamos in 1948 – the Mathematical Analyzer Numerical Integrator and Automatic Computer (MANIAC) – which ran its first large-scale hydrodynamic calculation in March 1952. Many of its users were physicists, and its operators and coders included mathematicians Mary Tsingou (later Tsingou-Menzel), Marjorie Jones (Devaney) and Elaine Felix (Alei); plus Verna Ellingson (later Gardiner) and Lois Cook (Leurgans).

    Early algorithms

    The Los Alamos scientists tried all sorts of problems on the MANIAC, including a chess-playing program – the first documented case of a machine defeating a human at the game. However, two of these projects stand out because they had profound implications on computational science.

    In 1953 the Tellers, together with Metropolis and physicists Arianna and Marshall Rosenbluth, published the seminal article “Equation of state calculations by fast computing machines” (J. Chem. Phys. 21 1087). The work introduced the ideas behind the “Metropolis (later renamed Metropolis–Hastings) algorithm”, which is a Monte Carlo method that is based on the concept of “importance sampling”. (While Metropolis was involved in the development of Monte Carlo methods, it appears that he did not contribute directly to the article, but provided access to the MANIAC nightshift.) This is the progenitor of the Markov Chain Monte Carlo methods, which are widely used today throughout science and engineering.

    Marshall later recalled how the research came about when he and Arianna had proposed using the MANIAC to study how solids melt (AIP Conf. Proc. 690 22).

    Black and white photo of two men looking at a chess board on a table in front of large rack of computer switches
    A mind for chess: Paul Stein (left) and Nicholas Metropolis play “Los Alamos” chess against the MANIAC. “Los Alamos” chess was a simplified version of the game, with the bishops removed to reduce the MANIAC’s processing time between moves. The computer still needed about 20 minutes between moves. The MANIAC became the first computer to beat a human opponent at chess in 1956. (Courtesy: US government / Los Alamos National Laboratory)

    Edward Teller meanwhile had the idea of using statistical mechanics and taking ensemble averages instead of following detailed kinematics for each individual disk, and Mici helped with programming during the initial stages. However, the Rosenbluths did most of the work, with Arianna translating and programming the concepts into an algorithm.

    The 1953 article is remarkable, not only because it led to the Metropolis algorithm, but also as one of the earliest examples of using a digital computer to simulate a physical system. The main innovation of this work was in developing “importance sampling”. Instead of sampling from random configurations, it samples with a bias toward physically important configurations which contribute more towards the integral.

    Furthermore, the article also introduced another computational trick, known as “periodic boundary conditions” (PBCs): a set of conditions which are often used to approximate an infinitely large system by using a small part known as a “unit cell”. Both importance sampling and PBCs went on to become workhorse methods in computational physics.
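
    Both tricks are easy to illustrate with a modern sketch (written in Python, and bearing no resemblance to the MANIAC code): a Metropolis walk over configurations of a few repulsive disks in a periodic box, where trial moves are accepted with probability min(1, exp(−ΔE/k_BT)) – the importance-sampling bias – and distances are computed with the minimum-image convention – the periodic boundary conditions. The potential and parameters are invented for illustration.

        # Minimal modern sketch of the two tricks described above (nothing like the
        # original MANIAC code): Metropolis importance sampling of a small 2D system
        # of soft disks in a periodic box, using the minimum-image convention for the
        # periodic boundary conditions. All parameters are invented for illustration.
        import numpy as np

        rng = np.random.default_rng(0)

        N, L, T = 16, 4.0, 1.0        # particles, box side, temperature (k_B = 1)
        STEP = 0.1                    # maximum trial displacement

        def pair_energy(r2):
            """Purely repulsive soft-disk pair potential, u(r) = 1/r^12 (r2 = r squared)."""
            return 1.0 / r2**6

        def energy_of(i, pos):
            """Interaction energy of particle i with all others, with PBC."""
            d = pos - pos[i]
            d -= L * np.round(d / L)              # minimum-image convention (PBC)
            r2 = np.sum(d * d, axis=1)
            r2[i] = np.inf                        # exclude self-interaction
            return np.sum(pair_energy(r2))

        pos = rng.uniform(0.0, L, size=(N, 2))
        accepted = 0

        for sweep in range(2000):
            i = rng.integers(N)
            old_e = energy_of(i, pos)
            trial = pos.copy()
            trial[i] = (trial[i] + rng.uniform(-STEP, STEP, 2)) % L   # wrap into the box
            new_e = energy_of(i, trial)
            # Metropolis rule: always accept downhill moves, accept uphill ones
            # with probability exp(-dE/T): this is the importance-sampling bias.
            if new_e <= old_e or rng.random() < np.exp(-(new_e - old_e) / T):
                pos = trial
                accepted += 1

        print(f"Acceptance ratio: {accepted / 2000:.2f}")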

    In the summer of 1953, physicist Enrico Fermi, Ulam, Tsingou and physicist John Pasta also made a significant breakthrough using the MANIAC. They ran a “numerical experiment” as part of a series meant to illustrate possible uses of electronic computers in studying various physical phenomena.

    The team modelled a 1D chain of oscillators with a small nonlinearity to see if it would behave as hypothesized, reaching an equilibrium with the energy redistributed equally across the modes (doi.org/10.2172/4376203). However, their work showed that this was not guaranteed for small perturbations – a non-trivial and non-intuitive observation that would not have been apparent without the simulations. It is the first example of a physics discovery made not by theoretical or experimental means, but through a computational approach. It would later lead to the discovery of solitons and integrable models, the development of chaos theory, and a deeper understanding of ergodic limits.
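
    The spirit of that numerical experiment can be reproduced in a few lines today (a rough sketch, not the original 1953 calculation): integrate a chain of unit masses coupled by slightly nonlinear springs, start all the energy in the lowest normal mode, and watch how reluctantly it spreads. The chain length, nonlinearity and integration settings below are assumptions.

        # Rough sketch in the spirit of the Fermi-Pasta-Ulam-Tsingou experiment
        # (not the original 1953 calculation): N unit masses joined by springs with a
        # small quadratic nonlinearity (the "alpha" model), started in the lowest
        # normal mode. Parameters and integration details are assumptions.
        import numpy as np

        N, ALPHA, DT, STEPS = 32, 0.25, 0.05, 200_000

        def accel(x):
            """Force on each interior mass of the alpha-FPUT chain, fixed ends."""
            xp = np.concatenate(([0.0], x, [0.0]))          # fixed boundary masses
            dl, dr = xp[1:-1] - xp[:-2], xp[2:] - xp[1:-1]  # left/right spring stretch
            return (dr - dl) + ALPHA * (dr**2 - dl**2)

        # Start with all energy in the lowest normal mode.
        j = np.arange(1, N + 1)                 # site indices
        x = np.sin(np.pi * j / (N + 1))
        v = np.zeros(N)

        mode1_energy = []
        for step in range(STEPS):
            # Velocity-Verlet integration.
            a = accel(x)
            x = x + v * DT + 0.5 * a * DT**2
            a_new = accel(x)
            v = v + 0.5 * (a + a_new) * DT
            if step % 20_000 == 0:
                # Harmonic energy in mode 1 (projection onto the lowest normal mode).
                q1 = np.sqrt(2 / (N + 1)) * np.sum(x * np.sin(np.pi * j / (N + 1)))
                p1 = np.sqrt(2 / (N + 1)) * np.sum(v * np.sin(np.pi * j / (N + 1)))
                w1 = 2 * np.sin(np.pi / (2 * (N + 1)))
                mode1_energy.append(0.5 * (p1**2 + w1**2 * q1**2))

        print("Mode-1 energy over time:", [f"{e:.3f}" for e in mode1_energy])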

    Although the paper says the work was done by all four scientists, Tsingou’s role was forgotten, and the results became known as the Fermi–Pasta–Ulam problem. It was not until 2008, when French physicist Thierry Dauxois advocated for giving her credit in a Physics Today article, that Tsingou’s contribution was properly acknowledged. Today the finding is called the Fermi–Pasta–Ulam–Tsingou problem.

    The year 1953 also saw IBM’s first commercial, fully electronic computer – an IBM 701 – arrive at Los Alamos. Soon the theoretical division had two of these machines, which, alongside the MANIAC, gave the scientists unprecedented computing power. Among those to take advantage of the new devices were Martha Evans (about whom very little is known) and theoretical physicist Francis Harlow, who began to tackle the largely unexplored subject of computational fluid dynamics.

    The idea was to use a mesh of cells through which the fluid, represented as particles, would move. This computational method made it possible to solve complex hydrodynamics problems (involving large distortions and compressions of the fluid) in 2D and 3D. Indeed, the method proved so effective that it became a standard tool in plasma physics where it has been applied to every conceivable topic from astrophysical plasmas to fusion energy.
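
    The bare bones of that cycle are sketched below in one dimension: deposit the particles onto the mesh, update a field on the mesh, then push the particles with the field interpolated back from the cells. The names are illustrative and the field solve is omitted, so this is only a schematic of the approach, not Evans and Harlow’s actual scheme.

        import numpy as np

        def pic_cycle(pos, vel, force_on_grid, dx, dt):
            """One schematic particle-to-grid-to-particle cycle in 1D."""
            n_cells = len(force_on_grid)
            # 1) Deposit: assign each particle to its nearest cell
            cells = np.clip((pos / dx).astype(int), 0, n_cells - 1)
            density = np.bincount(cells, minlength=n_cells)
            # 2) A real code would now solve for the field (pressure, velocity, etc.)
            #    on the mesh from 'density'; here the force per cell is supplied.
            # 3) Push: interpolate the cell force back to the particles and move them
            vel = vel + force_on_grid[cells] * dt
            pos = pos + vel * dt
            return pos, vel, density

        # 1000 particles on a 50-cell mesh, pushed by a uniform illustrative force
        pos = np.random.uniform(0.0, 50.0, size=1000)
        vel = np.zeros(1000)
        pos, vel, rho = pic_cycle(pos, vel, np.full(50, 0.1), dx=1.0, dt=0.01)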

    The resulting internal Los Alamos report – The Particle-in-cell Method for Hydrodynamic Calculations, published in 1955 – showed Evans as first author and acknowledged eight people (including Evans) for the machine calculations. However, while Harlow is remembered as one of the pioneers of computational fluid dynamics, Evans was forgotten.

    A clear-cut division of labour?

    In an age when women had very limited access to the front lines of research, the computational war effort brought in many female researchers and technical staff. As their contributions come to light, it becomes clear that their role was not simply a clerical one.

    Three black and white photos of people operating a large room-sized computer
    Skilled role Operating the ENIAC required an analytical mind as well as technical skills. (Top) Irwin Goldstein setting the switches on one of the ENIAC’s function tables at the Moore School of Electrical Engineering in 1946. (Middle) Gloria Gordon (later Bolotsky – crouching) and Ester Gerston (standing) wiring the right side of the ENIAC with a new program, c. 1946. (Bottom) Glenn A Beck changing a tube on the ENIAC. Replacing a bad tube meant checking among the ENIAC’s 19,000 possibilities. (Courtesy: US Army / Harold Breaux; US Army / ARL Technical Library; US Army)

    There is a view that the coders’ work was “the vital link between the physicist’s concepts (about which the coders more often than not didn’t have a clue) and their translation into a set of instructions that the computer was able to perform, in a language about which, more often than not, the physicists didn’t have a clue either”, as physicists Giovanni Battimelli and Giovanni Ciccotti wrote in 2018 (Eur. Phys. J. H 43 303). But the examples we have seen show that some of the coders had a solid grasp of the physics, and some of the physicists had a good understanding of the machine operation. Rather than a skilled–non-skilled/men–women separation, the division of labour was blurred. Indeed, it was more of an effective collaboration between physicists, mathematicians and engineers.

    Even in the early days of the T-5 division before electronic computers existed, Livesay and Ewing, for example, attended maths lectures from von Neumann, and introduced him to punch-card operations. As has been documented in books including Their Day in the Sun by Ruth Howes and Caroline Herzenberg, they also took part in the weekly colloquia held by J Robert Oppenheimer and other project leaders. This shows they should not be dismissed as mere human calculators and machine operators who supposedly “didn’t have a clue” about physics.

    Verna Ellingson (Gardiner) is another forgotten coder who worked at Los Alamos. While little information about her can be found, she appears as the last author on a 1955 paper (Science 122 465) written with Metropolis and physicist Joseph Hoffman – “Study of tumor cell populations by Monte Carlo methods”. The next year she was first author of “On certain sequences of integers defined by sieves” with mathematical physicist Roger Lazarus, Metropolis and Ulam (Mathematics Magazine 29 117). She also worked with physicist George Gamow on attempts to discover the code for DNA selection of amino acids, which just shows the breadth of projects she was involved in.

    Evans not only worked with Harlow but also took part in a 1959 conference on self-organizing systems, where she queried AI pioneer Frank Rosenblatt on his ideas about human and machine learning. Her presence at such a meeting, in an age when few women attended them, implies we should not view her as “just a coder”.

    With their many and wide-ranging contributions, it is more than likely that Evans, Gardiner, Tsingou and many others were full-fledged researchers, and were perhaps even the first computational scientists. “These women were doing work that modern computational physicists in the [Los Alamos] lab’s XCP [Weapons Computational Physics] Division do,” says Nicholas Lewis, a historian at Los Alamos. “They needed a deep understanding of both the physics being studied, and of how to map the problem to the particular architecture of the machine being used.”

    An evolving identity

    Black and white photo of a woman using equipment to punch a program onto paper tape
    What’s in a name Marjory Jones (later Devaney), a mathematician, shown in 1952 punching a program onto paper tape to be loaded into the MANIAC. The name of this role evolved to programmer during the 1950s. (Courtesy: US government / Los Alamos National Laboratory)

    In the 1950s there was no computational physics or computer science, so it is unsurprising that the practitioners of these disciplines went by different names, and that their identity has evolved over the decades since.

    1930s–1940s

    Originally a “computer” was a person doing calculations by hand or with the help of a mechanical calculator.

    Late 1940s – early 1950s

    A “coder” was a person who translated mathematical concepts into a set of instructions in machine language. John von Neumann and Herman Goldstine distinguished between “coding” and “planning”, with the former being the lower-level work of turning flow diagrams into machine language (and doing the physical configuration) and the latter being the mathematical analysis of the problem.

    Meanwhile, an “operator” would physically handle the computer (replacing punch cards, doing the rewiring, and so on). In the late 1940s coders were also operators.

    As historians note in the book ENIAC in Action, this was an age in which “It was hard to devise the mathematical treatment without a good knowledge of the processes of mechanical computation…It was also hard to operate the ENIAC without understanding something about the mathematical task it was undertaking.”

    For the ENIAC a “programmer” was not a person but “a unit combining different sequences in a coherent computation”. The term would later shift and eventually overlap with the meaning of coder as a person’s job.

    1960s

    Computer scientist Margaret Hamilton, who led the development of the on-board flight software for NASA’s Apollo program, coined the term “software engineering” to distinguish the practice of designing, developing, testing and maintaining software from the engineering tasks associated with the hardware.

    1980s – early 2000s

    Use of the term “programmer” for someone who coded computers peaked in the 1980s, but by the 2000s it had given way to other job titles, such as various flavours of “developer” or “software architect”.

    Early 2010s

    A “research software engineer” is a person who combines professional software engineering expertise with an intimate understanding of scientific research.

    Overlooked then, overlooked now

    Credited or not, these pioneering women and their contributions have been mostly forgotten, and only in recent decades have their roles come to light again. But why were they obscured by history in the first place?

    Secrecy and sexism seem to be the main factors at play. For example, Livesay was not allowed to pursue a PhD in mathematics because she was a woman, and in the cases of the many married couples, the team contributions were attributed exclusively to the husband. The existence of the Manhattan Project was publicly announced in 1945, but documents that contain certain nuclear-weapons-related information remain classified today. Because these are likely to remain secret, we will never know the full extent of these pioneers’ contributions.

    But another often overlooked reason is the widespread underappreciation of the key role of computational scientists and research software engineers, a term that was only coined just over a decade ago. Even today, these non-traditional research roles end up being undervalued. A 2022 survey by the UK Software Sustainability Institute, for example, showed that only 59% of research software engineers were named as authors, with barely a quarter (24%) mentioned in the acknowledgements or main text, while a sixth (16%) were not mentioned at all.

    The separation between those who understand the physics and those who write the code and understand and operate the hardware goes back to the early days of computing (see box above), but it wasn’t entirely accurate even then. People who implement complex scientific computations are not just coders or skilled operators of supercomputers, but truly multidisciplinary scientists with a deep understanding of the scientific problems, the mathematics, the computational methods and the hardware.

    Such people – whatever their gender – play a key role in advancing science and yet remain the unsung heroes of the discoveries their work enables. Perhaps what this story of the forgotten pioneers of computational physics tells us is that some views rooted in the 1950s are still influencing us today. It’s high time we moved on.

    The post The forgotten pioneers of computational physics appeared first on Physics World.

    https://physicsworld.com/a/the-forgotten-pioneers-of-computational-physics/
    No Author

    Classical gravity may entangle matter, new study claims

    Surprising result could guide searches for quantum gravity

    The post Classical gravity may entangle matter, new study claims appeared first on Physics World.

    Gravity might be able to quantum-entangle particles even if the gravitational field itself is classical. That is the conclusion of a new study by Joseph Aziz and Richard Howl at Royal Holloway University of London, which challenges a popular view that such entanglement would necessarily imply that gravity must be quantized. The result could be important in the ongoing attempt to develop a theory of quantum gravity that unites quantum mechanics with Einstein’s general theory of relativity.

    “When you try to quantize the gravitational interaction in exactly the same way we tried to mathematically quantize the other forces, you end up with mathematically inconsistent results – you end up with infinities in your calculations that you can’t do anything about,” Howl tells Physics World.

    “With the other interactions, we quantized them assuming they live within an independent background of classical space and time,” Howl explains. “But with quantum gravity, arguably you cannot do this [because] gravity describes space−time itself rather than something within space−time.”

    Quantum entanglement occurs when two particles share linked quantum states even when separated. While it has become a powerful probe of the gravitational field, the central question is whether gravity can mediate entanglement only if it is itself quantum in nature.

    General treatment

    “It has generally been considered that the gravitational interaction can only entangle matter if the gravitational field is quantum,” Howl says. “We have argued that you could treat the gravitational interaction as more general than just the mediation of the gravitational field such that even if the field is classical, you could in principle entangle matter.”

    Quantum field theory postulates that entanglement between masses arises through the exchange of virtual gravitons. These are hypothetical, transient quantum excitations of the gravitational field. Aziz and Howl propose that even if the field remains classical, virtual-matter processes can still generate entanglement indirectly. These processes, he says, “will persist even when the gravitational field is considered classical and could in principle allow for entanglement”.

    The idea of probing the quantum nature of gravity through entanglement goes back to a suggestion by Richard Feynman in the 1950s. He envisioned placing a tiny mass in a superposition of two locations and checking whether its gravitational field was also superposed. Though elegant, the idea seemed untestable at the time.

    Recent proposals − most notably by teams led by Sougato Bose and by Chiara Marletto and Vlatko Vedral – revived Feynman’s insight in a more practical form.

    Feasible tests

    “Recently, two proposals showed that one way you could test that the field is in a superposition (and thus quantum) is by putting two masses in a quantum superposition of two locations and seeing if they become entangled through the gravitational interaction,” says Howl. “This also seemed to be much more feasible than Feynman’s original idea.” Such experiments might use levitated diamonds, metallic spheres, or cold atoms – systems where both position and gravitational effects can be precisely controlled.

    Aziz and Howl’s work, however, considers whether such entanglement could arise even if gravity is not quantum. They find that certain classical-gravity processes can in principle entangle particles, though the predicted effects are extremely small.

    “These classical-gravity entangling effects are likely to be very small in near-future experiments,” Howl says. “This, though, is actually a good thing: it means that if we see entanglement…we can be confident that this means that gravity is quantized.”

    The paper has drawn a strong response from some leading figures in the field, including Marletto at the University of Oxford, who co-developed the original idea of using gravitationally induced entanglement as a test of quantum gravity.

    “The phenomenon of gravitationally induced entanglement … is a game changer in the search for quantum gravity, as it provides a way to detect quantum effects in the gravitational field indirectly, with laboratory-scale equipment,” she says. Detecting it would, she adds, “constitute the first experimental confirmation that gravity is quantum, and the first experimental refutation of Einstein’s relativity as an adequate theory of gravity”.

    However, Marletto disputes Aziz and Howl’s interpretation. “No classical theory of gravity can mediate entanglement via local means, contrary to what the study purports to show,” she says. “What the study actually shows is that a classical theory with direct, non-local interactions between the quantum probes can get them entangled.” In her view, that mechanism “is not new and has been known for a long time”.

    Despite the controversy, Howl and Marletto agree that experiments capable of detecting gravitationally induced entanglement would be transformative. “We see our work as strengthening the case for these proposed experiments,” Howl says. Marletto concurs that “detecting gravitationally induced entanglement will be a major milestone … and I hope and expect it will happen within the next decade.”

    Howl hopes the work will encourage further discussion about quantum gravity. “It may also lead to more work on what other ways you could argue that classical gravity can lead to entanglement,” he says.

    The research is described in Nature.

    The post Classical gravity may entangle matter, new study claims appeared first on Physics World.

    https://physicsworld.com/a/classical-gravity-may-entangle-matter-new-study-claims/
    No Author

    Is Donald Trump conducting a ‘blitzkrieg’ on science?

    The US High Energy Physics Advisory Panel has been dissolved for reasons of politics, not efficiency, says Robert P Crease

    The post Is Donald Trump conducting a ‘blitzkrieg’ on science? appeared first on Physics World.

    “Drain the swamp!”

    In the intense first few months of his second US presidency, Donald Trump has been enacting his old campaign promise with a vengeance. He’s ridding all the muck from the American federal bureaucracy, he claims, and finally bringing it back under control.

    Scientific projects and institutions are particular targets of his, with one recent casualty being the High Energy Physics Advisory Panel (HEPAP). Outsiders might shrug their shoulders at a panel of scientists being axed. Panels come and go. Also, any development in Washington these days is accompanied by confusion, uncertainty, and the possibility of reversal.

    But HEPAP’s dissolution is different. Set up in 1967, it’s been a valuable and long-standing advisory committee of the Office of Science at the US Department of Energy (DOE). HEPAP has a distinguished track record of developing, supporting and reviewing high-energy physics programmes, setting priorities and balancing different areas. Many scientists are horrified by its axing.

    The terminator

    Since taking office in January 2025, Trump has issued a flurry of executive orders – presidential decrees that do not need Congressional approval, legislative review or public debate. One order, which he signed in February, was entitled “Commencing the Reduction of the Federal Bureaucracy”.

    It sought to reduce parts of the government “that the President has determined are unnecessary”, seeking to eliminate “waste and abuse, reduce inflation, and promote American freedom and innovation”. While supporters see those as laudable goals, opponents believe the order is driving a stake into the heart of US science.

    Hugely valuable, long-standing scientific advisory committees have been axed at key federal agencies, including NASA, the National Science Foundation, the Environmental Protection Agency, the National Oceanic and Atmospheric Administration, the US Geological Survey, the National Institutes of Health, the Food and Drug Administration, and the Centers for Disease Control and Prevention.

    What’s more, the committees were terminated without warning or debate, eliminating load-bearing pillars of the US science infrastructure. It was, as the Columbia University sociologist Gil Eyal put it in a recent talk, the “Trump 2.0 Blitzkrieg”.

    Then, on 30 September, Trump’s enablers took aim at advisory committees at the DOE Office of Science. According to the DOE’s website, a new Office of Science Advisory Committee (SCAC) will take over functions of the six former discretionary (non-legislatively mandated) Office of Science advisory committees.

    “Any current charged responsibilities of these former committees will be transferred to the SCAC,” the website states matter-of-factly. The committee will provide “independent, consensus advice regarding complex scientific and technical issues” to the entire Office of Science. Its members will be appointed by under secretary for science Dario Gil – a political appointee.

    Apart from HEPAP, others axed without warning were the Nuclear Science Advisory Committee, the Basic Energy Sciences Advisory Committee, the Fusion Energy Sciences Advisory Committee, the Advanced Scientific Computing Advisory Committee, and the Biological and Environmental Research Advisory Committee.

    Over the years, each committee served a different community and was made up of prominent research scientists who were closely in touch with other researchers. Each committee could therefore assemble awareness of – and technical knowledge about – promising emerging initiatives and identify the less promising ones.

    Many committee members only learned of the changes when they received letters or e-mails out of the blue informing them that their committee had been dissolved, that a new committee had replaced them, and that they were not on it. No explanation was given.

    Physicists whom I have spoken to are appalled for two main reasons. One is that closing HEPAP and the other committees will hamper both the technical support and the community input that the Office of Science has relied on to promote the efficient, effective and robust growth of physics.

    “Speaking just for high-energy physics, HEPAP gave feedback on the DOE and NSF funding strategies and priorities for the high-energy physics experiments,” says Kay Kinoshita from the University of Cincinnati, a former HEPAP member. “The panel system provided a conduit for information between the agencies and the community, so the community felt heard and the agencies were (mostly) aligned with the community consensus”.

    As Kinoshita continued: “There are complex questions that each panel has to deal with, even within the topical area. It’s hard to see how a broader panel is going to make better strategic decisions, ‘better’ meaning in terms of scientific advancement. In terms of community buy-in I expect it will be worse.”

    Other physicists cite a second reason for alarm. The elimination of the advisory committees spreads the expertise so thinly as to increase the likelihood of political pressure on decisions. “If you have one committee you are not going to get the right kind of fine detail,” says Michael Lubell, a physicist and science-policy expert at the City College of New York, who has sat in on meetings of most of the Office of Science advisory committees.

    “You’ll get opinions from people outside that area and you won’t be able to get information that you need as a policy maker to decide how the resources are to be allocated,” he adds. “A condensed-matter physicist for example, would probably have insufficient knowledge to advise DOE on particle physics. Instead, new committee members would be expected to vet programs based on ideological conformity to what the Administration wants.”

    The critical point

    At the end of the Second World War, the US began to construct an ambitious long-range plan to promote science, starting with the establishment of the National Science Foundation in 1950, and it has been developed and extended ever since. The plan aimed to incorporate both the ability of elected politicians to direct science towards social needs and the independence of scientists to explore what is possible.

    US presidents have, of course, had pet scientific projects: the War on Cancer (Nixon), the Moon Shot (Kennedy), promoting renewable energy (Carter), to mention a few. But it is one thing for a president to set science to producing a socially desirable product and another to manipulate the scientific process itself.

    “This is another sad day for American science,” says Lubell. “If I were a young person just embarking on a career, I would get the hell out of the country. I would not want to waste the most creative years of my life waiting for things to turn around, if they ever do. What a way to destroy a legacy!”

    The end of HEPAP is not draining a swamp but creating one.

    The post Is Donald Trump conducting a ‘blitzkrieg’ on science? appeared first on Physics World.

    https://physicsworld.com/a/is-donald-trump-conducting-a-blitzkrieg-on-science/
    Robert P Crease

    Delft Circuits, Bluefors: the engine-room driving joined-up quantum innovation

    Technology partners will focus on scalable cryogenic I/O cabling assemblies for next-generation quantum computing systems

    The post Delft Circuits, Bluefors: the engine-room driving joined-up quantum innovation appeared first on Physics World.

    delft-circuits-cri/oflex cabling technology
    At-scale quantum By integrating Delft Circuits’ Cri/oFlex® cabling technology (above) into Bluefors’ dilution refrigerators, the vendors’ combined customer base will benefit from an industrially proven and fully scalable I/O solution for their quantum systems. Cri/oFlex® cabling combines fully integrated filtering with a compact footprint and low heatload. (Courtesy: Delft Circuits)

    Better together. That’s the headline take on a newly inked technology partnership between Bluefors, a heavyweight Finnish supplier of cryogenic measurement systems, and Delft Circuits, a Dutch manufacturer of specialist I/O cabling solutions designed for the scale-up and industrial deployment of next-generation quantum computers.

    The drivers behind the tie-up are clear: as quantum systems evolve – think vastly increased qubit counts plus ever-more exacting requirements on gate fidelity – developers in research and industry will reach a point where current coax cabling technology doesn’t cut it anymore. The answer? Collaboration, joined-up thinking and product innovation.

    In short, by integrating Delft Circuits’ Cri/oFlex® cabling technology into Bluefors’ dilution refrigerators, the vendors’ combined customer base will benefit from a complete, industrially proven and fully scalable I/O solution for their quantum systems. The end-game: to overcome the quantum tech industry’s biggest bottleneck, forging a development pathway from quantum computing systems with hundreds of qubits today to tens of thousands of qubits by 2030.

    Joined-up thinking

    For context, Cri/oFlex® cryogenic RF cables comprise a stripline (a type of transmission line) based on planar microwave circuitry – essentially a conducting strip encapsulated in dielectric material and sandwiched between two conducting ground planes. The use of the polyimide Kapton® as the dielectric ensures Cri/oFlex® cables remain flexible in cryogenic environments (which are necessary to generate quantum states, manipulate them and read them out), with silver or superconducting NbTi providing the conductive strip and ground layer. The standard product comes as a multichannel flex (eight channels per flex) with a range of I/O channel configurations tailored to the customer’s application needs, including flux bias lines, microwave drive lines, signal lines or read-out lines.

    Robby Ferdinandus of Delft Circuits
    “Together with Bluefors, we will accelerate the journey to quantum advantage,” says Robby Ferdinandus of Delft Circuits. (Courtesy: Delft Circuits)

    “Reliability is a given with Cri/oFlex®,” says Robby Ferdinandus, global chief commercial officer for Delft Circuits and a driving force behind the partnership with Bluefors. “By integrating components such as attenuators and filters directly into the flex,” he adds, “we eliminate extra parts and reduce points of failure. Combined with fast thermalization at every temperature stage, our technology ensures stable performance across thousands of channels, unmatched by any other I/O solution.”

    Technology aside, the new partnership is informed by a “one-stop shop” mindset, offering the high-density Cri/oFlex® solution pre-installed and fully tested in Bluefors cryogenic measurement systems. For the end-user, think turnkey efficiency: streamlined installation, commissioning, acceptance and, ultimately, enhanced system uptime.

    Scalability is front-and-centre too, thanks to Delft Circuits’ pre-assembled and tested side-loading systems. The high-density I/O cabling solution delivers up to 50% more channels per side-loading port than Bluefors’ current High Density Wiring, providing a total of 1536 input or control lines to an XLDsl cryostat. In addition, more wiring lines can be added to multiple KF ports as a custom option.

    Doubling up for growth

    Reetta Kaila of Bluefors
    “Our market position in cryogenics is strong, so we have the ‘muscle’ and specialist know-how to integrate innovative technologies like Cri/oFlex®,” says Reetta Kaila of Bluefors. (Courtesy: Bluefors)

    Reciprocally, there’s significant commercial upside to this partnership. Bluefors is the quantum industry’s leading cryogenic systems OEM and, by extension, Delft Circuits now has access to the former’s established global customer base, amplifying its channels to market by orders of magnitude. “We have stepped into the big league here and, working together, we will ensure that Cri/oFlex® becomes a core enabling technology on the journey to quantum advantage,” notes Ferdinandus.

    That view is amplified by Reetta Kaila, director for global technical sales and new products at Bluefors (and, alongside Ferdinandus, a main-mover behind the partnership). “Our market position in cryogenics is strong, so we have the ‘muscle’ and specialist know-how to integrate innovative technologies like Cri/oFlex® into our dilution refrigerators,” she explains.

    A win-win, it seems, along several coordinates. “The Bluefors sales teams are excited to add Cri/oFlex® into the product portfolio,” Kaila adds. “It’s worth noting, though, that the collaboration extends across multiple functions – technical and commercial – and will therefore ensure close alignment of our respective innovation roadmaps.”

    Scalable I/O will accelerate quantum innovation

    Deconstructed, Delft Circuits’ value proposition is all about enabling, from an I/O perspective, the transition of quantum technologies out of the R&D lab into at-scale practical applications. More specifically: Cri/oFlex® technology allows quantum scientists and engineers to increase the I/O cabling density of their systems easily – and by a lot – while guaranteeing high gate fidelities (minimizing noise and heating) as well as market-leading uptime and reliability.

    To put some hard-and-fast performance milestones against that claim, the company has published a granular product development roadmap that aligns Cri/oFlex® cabling specifications with the anticipated evolution of quantum computing systems – from 150+ qubits today out to 40,000 qubits and beyond in 2029 (see figure below, “Quantum alignment”).

    The resulting milestones are based on a study of the development roadmaps of more than 10 full-stack quantum computing vendors – a consolidated view that will ensure the “guiding principles” of Delft Circuits’ innovation roadmap align with the aggregate quantity and quality of qubits targeted by the system developers over time.

    delft circuits roadmap
    Quantum alignment The new product development roadmap from Delft Circuits starts with the guiding principles, highlighting performance milestones to be achieved by the quantum computing industry over the next five years – specifically, the number of physical qubits per system and gate fidelities. By extension, cabling metrics in the Delft Circuits roadmap focus on “quantity”: the number of I/O channels per loader (i.e. the wiring trees that insert into a cryostat, with typical cryostats having between six and 24 slots for loaders) and the number of channels per cryostat (summing across all loaders); and on “quality” (the crosstalk in the cabling flex). To complete the picture, the roadmap outlines product introductions at a conceptual level to enable both the quantity and quality timelines. (Courtesy: Delft Circuits)

    The post Delft Circuits, Bluefors: the engine-room driving joined-up quantum innovation appeared first on Physics World.

    https://physicsworld.com/a/delft-circuits-bluefors-the-engine-room-driving-joined-up-quantum-innovation/
    No Author

    Microbubbles power soft, programmable artificial muscles

    Ultrasound-activated microbubble arrays create flexible actuators for applications ranging from soft robotics to minimally invasive surgery

    The post Microbubbles power soft, programmable artificial muscles appeared first on Physics World.

    Ultrasound-powered soft surgical robot
    Ultrasound-powered stingraybot A bioinspired soft surgical robot with artificial muscles made from microbubble arrays swims forward under swept-frequency ultrasound excitation. Right panels: motion of the microbubble-array fins during swimming. Lower inset: schematic of the patterned microbubble arrays. Scale bar: 1 cm. (Courtesy: CC BY 4.0/Nature 10.1038/s41586-025-09650-3)

    Artificial muscles that offer flexible functionality could prove invaluable for a range of applications, from soft robotics and wearables to biomedical instrumentation and minimally invasive surgery. Current designs, however, are limited by complex actuation mechanisms and challenges in miniaturization. Aiming to overcome these obstacles, a research team headed up at the Acoustic Robotics Systems Lab (ETH Zürich) in Switzerland is using microbubbles to create soft, programmable artificial muscles that can be wirelessly controlled via targeted ultrasound activation.

    Gas-filled microbubbles can concentrate acoustic energy, providing a means to initiate movement with rapid response times and high spatial accuracy. In this study, reported in Nature, team leader Daniel Ahmed and colleagues built a synthetic muscle from a thin flexible membrane containing arrays of more than 10,000 microbubbles. When acoustically activated, the microbubbles generate thrust and cause the membrane to deform. And as different sized microbubbles resonate at different ultrasound frequencies, the arrays can be designed to provide programmable motion.

    “Ultrasound is safe, non-invasive, can penetrate deep into the body and can generate large forces. However, without microbubbles, a much higher force is needed to deform the muscle, and selective activation is difficult,” Ahmed explains. “To overcome this limitation, we use microbubbles, which amplify force generation at specific sites and act as resonant systems. As a result, we can activate the artificial muscle at safe ultrasound power levels and generate complex motion.”

    The team created the artificial muscles from a thin silicone membrane patterned with an array of cylindrical microcavities with the dimensions of the desired microbubbles. Submerging this membrane in a water-filled acoustic chamber trapped tens of thousands of gas bubbles within the cavities (one per cavity). The final device contains around 3000 microbubbles per mm2 and weighs just 0.047 mg/mm2.

    To demonstrate acoustic activation, the researchers fabricated an artificial muscle containing uniform-sized microbubbles on one surface. They fixed one end of the muscle and exposed it to resonant frequency ultrasound, simultaneously exciting the entire microbubble array. The resulting oscillations generated acoustic streaming and radiation forces, causing the muscle to flex upward, with an amplitude dependent upon the ultrasound excitation voltage.

    Next, the team designed an 80 µm-thick, 3 x 0.5 cm artificial muscle containing arrays of three different sized microbubbles. Stimulation at 96.5, 82.3 and 33.2 kHz induced deformations in regions containing bubbles with diameters of 12, 16 and 66 µm, respectively. Exposure to swept-frequency ultrasound covering the three resonant frequencies sequentially activated the different arrays, resulting in an undulatory motion.

    Microbubble-array artificial muscles
    Microbubble muscles (a) Artificial muscle with thousands of microbubbles on its lower surface bends upwards when excited by ultrasound. (b) Artificial muscle containing arrays of microbubbles with three different diameters, each corresponding to a distinct natural frequency, exhibits undulatory motion (c) under swept-frequency ultrasound excitation. (Courtesy: CC BY 4.0/Nature 10.1038/s41586-025-09650-3)

    A multitude of functions

    Ahmed and colleagues showcased a range of applications for the artificial muscle by integrating microbubble arrays into functional devices, such as a miniature soft gripper for trapping and manipulating fragile live animals. The gripper comprises six to ten microbubble array-based “tentacles” that, when subjected to ultrasound, gently gripped a zebrafish larva with sub-100 ms response time. When the ultrasound was switched off, the tentacles opened and the larva swam away with no adverse effects.

    The artificial muscle can function as a conformable robotic skin that sticks and imparts motion to a stationary object, which the team demonstrated by attaching it to the surface of an excised pig heart. It can also be employed for targeted drug delivery – shown by the use of a microbubble-array robotic patch for ultrasound-enhanced delivery of dye into an agar block.

    The researchers also built an ultrasound-powered “stingraybot”, a soft surgical robot with artificial muscles (arrays of differently sized microbubbles) on either side to mimic the pectoral fins of a stingray. Exposure to swept-frequency ultrasound induced an undulatory motion that wirelessly propelled the 4 cm-long robot forward at a speed of about 0.8 body lengths per second.

    To demonstrate future practical biomedical applications, such as supporting minimally invasive surgery or site-specific drug release within the gastrointestinal tract, the researchers encapsulated a rolled up stingraybot within a 27 x 12 mm edible capsule. Once released into the stomach, the robot could be propelled on demand under ultrasound actuation. They also pre-folded a linear artificial muscle into a wheel shape and showed that swept ultrasound frequencies could propel it along the complex mucosal surfaces of the stomach and intestine.

    “Through the strategic use of microbubble configurations and voltage and frequency as ultrasound excitation parameters, we engineered a diverse range of preprogrammed movements and demonstrated their applicability across various robotic platforms,” the researchers write. “Looking ahead, these artificial muscles hold transformative potential across cutting-edge fields such as soft robotics, haptic medical devices and minimally invasive surgery.”

    Ahmed says that the team is currently developing soft patches that can conform to biological surfaces for drug delivery inside the bladder. “We are also designing soft, flexible robots that can wrap around a tumour and release drugs directly at the target site,” he tells Physics World. “Basically, we’re creating mobile conformable drug-delivery patches.”

    The post Microbubbles power soft, programmable artificial muscles appeared first on Physics World.

    https://physicsworld.com/a/microbubbles-power-soft-programmable-artificial-muscles/
    Tami Freeman

    China’s Shenzhou-20 crewed spacecraft return delayed by space debris impact

    Fears that the craft has been struck by a small piece of debris

    The post China’s Shenzhou-20 crewed spacecraft return delayed by space debris impact appeared first on Physics World.

    China has delayed the return of a crewed mission to the country’s space station over fears that the astronauts’ spacecraft has been struck by space debris. The craft was supposed to return to Earth on 5 November, but the China Manned Space Agency says it will now carry out an impact analysis and risk assessment before making any further decisions about when the astronauts will return.

    The Shenzhou programme involves taking astronauts to and from China’s Tiangong space station, which was constructed in 2022, for six-month stays.

    Shenzhou-20, carrying three crew, launched on 24 April from Jiuquan Satellite Launch Center on board a Long March 2F rocket. Once docked with Tiangong the three-member crew of Shenzhou-19 began handing over control of the station to the crew of Shenzhou-20 before they returned to Earth on 30 April.

    The three-member crew of Shenzhou-21 launched on 31 October and underwent the same hand-over process with the crew of Shenzhou-20 before they were set to return to Earth on Wednesday.

    Yet pre-operation checks revealed that the craft had been hit by “a small piece of debris”, although the location and scale of the damage to Shenzhou-20 have not been released.

    If the craft is deemed unsafe following the assessment, it is possible that the crew of Shenzhou-20 will return to Earth aboard Shenzhou-21. Another option is to launch a back-up Shenzhou spacecraft, which remains on stand-by and could be launched within eight days.

    Space debris is of increasing concern, and this marks the first time that the return of a crewed spacecraft has been delayed due to a potential debris impact. In 2021, for example, China noted that Tiangong had to perform two emergency avoidance manoeuvres to avoid fragments produced by Starlink satellites launched by SpaceX.

    • For more on the impact of space debris, sign up for a Physics World Live event on “Space junk – and how to solve it” on 10 November at 9 p.m. GMT.

    The post China’s Shenzhou-20 crewed spacecraft return delayed by space debris impact appeared first on Physics World.

    https://physicsworld.com/a/chinas-shenzhou-20-crewed-spacecraft-return-delayed-by-space-debris-impact/
    Michael Banks

    Twistelastics controls how mechanical waves move in metamaterials

    New technique could deliver reconfigurable phononic devices with myriad applications

    The post Twistelastics controls how mechanical waves move in metamaterials appeared first on Physics World.

    twisted surfaces can be used to manipulate mechanical waves
    How it works Researchers use twisted surfaces to manipulate mechanical waves, enabling new technologies for imaging, electronics and sensors. (Courtesy: A Alù)

    Simply placing two identical elastic metasurfaces atop each other and then rotating them relative to each other changes the topology of the elastic waves dispersing through the resulting stacked structure – from elliptic to hyperbolic. This new control technique, from physicists at the CUNY Advanced Science Research Center in the US, works over a broad frequency range and has been dubbed “twistelastics”. It could allow for advanced reconfigurable phononic devices with potential applications in microelectronics, ultrasound sensing and microfluidics.

    The researchers, led by Andrea Alù, say they were inspired by the recent advances in “twistronics” and its “profound impact” on electronic and photonic systems. “Our goal in this work was to explore whether similar twist-induced topological phenomena could be harnessed in elastodynamics in which phonons (vibrations of the crystal lattice) play a central role,” says Alù.

    In twistelastics, the rotations between layers of identical, elastic engineered surfaces are used to manipulate how mechanical waves travel through the materials. The new approach, say the CUNY researchers, allows them to reconfigure the behaviour of these waves and precisely control them. “This opens the door to new technologies for sensing, communication and signal processing,” says Alù.

    From elliptic to hyperbolic

    In their work, the researchers used computer simulations to design metasurfaces patterned with micron-sized pillars. When they stacked one such metasurface atop the other and rotated them at different angles, the resulting combined structure changed the way phonons spread. Indeed, their dispersion topology went from elliptic to hyperbolic.

    At a specific rotation angle, known as the “magic angle” (just like in twistronics), the waves become highly focused and begin to travel in one direction. This effect could allow for more efficient signal processing, says Alù, with the signals being easier to control over a wide range of frequencies.

    “The new twistelastic platform offers broadband, reconfigurable, and robust control over phonon propagation,” he tells Physics World. “This may be highly useful for a wide range of application areas, including surface acoustic wave (SAW) technologies, ultrasound imaging and sensing, microfluidic particle manipulation and on-chip phononic signal processing.”

    New frontiers

    Since the twist-induced transitions are topologically protected, again like in twistronics, the system is resilient to fabrication imperfections, meaning it can be miniaturized and integrated into real-world devices, he adds. “We are part of an exciting science and technology centre called ‘New Frontiers of Sound’, of which I am one of the leaders. The goal of this ambitious centre is to develop new acoustic platforms for the above applications enabling disruptive advances for these technologies.”

    Looking ahead, the researchers say they are looking into miniaturizing their metasurface design for integration into microelectromechanical systems (MEMS). They will also study multi-layer twistelastic architectures to improve how they can control wave propagation, and investigate active tuning mechanisms, such as electromechanical actuation, to dynamically control twist angles. “Adding piezoelectric phenomena for further control and coupling to the electromagnetic waves” is also on the agenda, says Alù.

    The present work is detailed in PNAS.

    The post Twistelastics controls how mechanical waves move in metamaterials appeared first on Physics World.

    https://physicsworld.com/a/twistelastics-controls-how-mechanical-waves-move-in-metamaterials/
    Isabelle Dumé

    Ternary hydride shows signs of room-temperature superconductivity at high pressures

    New alloy is made by doping scandium into the well-known La-H binary system

    The post Ternary hydride shows signs of room-temperature superconductivity at high pressures appeared first on Physics World.

    Crystal lattice structure of a new high-temperature superconductor
    Crystal structure In the new high-Tc superconductor, lanthanum and scandium atoms constitute the MgB2-type sublattice, while the surrounding hydrogen atoms form two types of cage-like configurations. (Courtesy: Guangtao Liu, Jilin University)

    Researchers in China claim to have made the first ever room-temperature superconductor by compressing an alloy of lanthanum-scandium (La-Sc) and the hydrogen-rich material ammonia borane (NH3BH3) together at pressures of 250–260 GPa, observing superconductivity with a maximum onset temperature of 298 K. While these high pressures are akin to those at the centre of the Earth, the work marks a milestone in the field of superconductivity, they say.

    Superconductors conduct electricity without resistance and many materials do this when cooled below a certain transition temperature, Tc. In most cases this temperature is very low – for example, solid mercury, the first superconductor to be discovered, has a Tc of 4.2 K. Researchers have therefore been looking for superconductors that operate at higher temperatures – perhaps even at room temperature. Such materials could revolutionize a host of application areas, including increasing the efficiency of electrical generators and transmission lines through lossless electricity transmission. They would also greatly simplify technologies such as MRI, for instance, that rely on the generation or detection of magnetic fields.

    Researchers made considerable progress towards this goal in the 1980s and 1990s with the discovery of the “high-temperature” copper oxide superconductors, which have Tc values between 30 and 133 K. Fast-forward to 2015 and the maximum known critical temperature rose even higher thanks to the discovery of a sulphide material, H3S, that has a Tc of 203 K when compressed to pressures of 150 GPa.

    This result sparked much interest in solid materials containing hydrogen atoms bonded to other elements and in 2019, the record was broken again, this time by lanthanum decahydride (LaH10), which was found to have a Tc of 250–260 K, albeit again at very high pressures. Then in 2021, researchers observed high-temperature superconductivity in the cerium hydrides, CeH9 and CeH10, which are remarkable because they are stable and boast high-temperature superconductivity at lower pressures (about 80 GPa, or 0.8 million atmospheres) than the other so-called “superhydrides”.

    Ternary hydrides

    In recent years, researchers have started turning their attention to ternary hydrides – substances that comprise three different atomic species rather than just two. Compared with binary hydrides, ternary hydrides are more structurally complex, which may allow them to have higher Tc values. Indeed, Li2MgH16 has been predicted to exhibit “hot” superconductivity with a Tc of 351–473 K under multimegabar pressures, and several other high-Tc hydrides, including MBxHy, MBeH8 and Mg2IrH6-7, have been predicted to be stable at comparatively lower pressures.

    In the new work, a team led by physicist Yanming Ma of Jilin University, studied LaSc2H24 – a compound that’s made by doping Sc into the well-known La-H binary system. Ma and colleagues had already predicted in theory – using the crystal structure prediction (CALYPSO) method – that this ternary material should feature a hexagonal P6/mmm symmetry. Introducing Sc into the La-H results in the formation of two novel interlinked H24 and H30 hydrogen clathrate “cages” with the H24 surrounding Sc and the H30 surrounding La.

    The researchers predicted that these two novel hydrogen frameworks should produce an exceptionally large hydrogen-derived density of states at the Fermi level (the highest energy level that electrons can occupy in a solid at a temperature of absolute zero), as well as enhancing coupling between electrons and phonons (vibrations of the crystal lattice) in the material, leading to an exceptionally high Tc of up to 316 K at high pressure.

    To characterize their material, the researchers placed it in a diamond-anvil cell, a device that generates extreme pressures as it squeezes the sample between two tiny, gem-grade crystals of diamond (one of the hardest substances known) while heating it with a laser. In situ X-ray diffraction experiments revealed that the compound crystallizes into a hexagonal structure, in excellent agreement with the predicted P6/mmm LaSc2H24 structure.

    A key piece of experimental evidence for superconductivity in the La-Sc-H ternary system, says co-author Guangtao Liu, came from measurements that repeatedly demonstrated the onset of zero electrical resistance below the Tc.

    Another significant proof, Liu adds, is that the Tc decreases monotonically with the application of an external magnetic field in a number of independently synthesized samples. “This behaviour is consistent with the conventional theory of superconductivity since an external magnetic field disrupts Cooper pairs – the charge carriers responsible for the zero-resistance state – thereby suppressing superconductivity.”

    “These two main observations demonstrate the superconductivity in our synthesized La-Sc-H compound,” he tells Physics World.

    Difficult experiments

    The experiments were not easy, Liu recalls. The first six months of attempting to synthesize LaSc2H24 below 200 GPa yielded no obvious Tc enhancement. “We then tried higher pressure and above 250 GPa, we had to manually deposit three precursor layers and ensure that four electrodes (for subsequent conductance measurements) were properly connected to the alloy in an extremely small sample chamber, just 10 to 15 µm in size,” he says. “This required hundreds of painstaking repetitions.”

    And that was not all: to synthesize the LaSc2H24, the researchers had to prepare the correct molar ratios of a precursor alloy. The Sc and La elements cannot form a solid solution because of their different atomic radii, so using a normal melting method makes it hard to control this ratio. “After about a year of continuous investigations, we finally used the magnetron sputtering method to obtain films of LaSc2H24 with the molar ratios we wanted,” Liu explains. “During the entire process, most of our experiments failed and we ended up damaging at least 70 pairs of diamonds.”

    Sven Friedemann of the University of Bristol, who was not involved in this work, says that the study is “an important step forward” for the field of superconductivity with a new record transition temperature of 295 K. “The new measurements show zero resistance (within resolution) and suppression in magnetic fields, thus strongly suggesting superconductivity,” he comments. “It will be exciting to see future work probing other signatures of superconductivity. The X-ray diffraction measurements could be more comprehensive and leave some room for uncertainty to whether it is indeed the claimed LaSc2H24 structure giving rise to the superconductivity.”

    Ma and colleagues say they will continue to study the properties of this compound – and in particular, verify the isotope effect (a signature of conventional superconductors) or measure the superconducting critical current. “We will also try to directly detect the Meissner effect – a key goal for high-temperature superhydride superconductors in general,” says Ma. “Guided by rapidly advancing theoretical predictions, we will also synthesize new multinary superhydrides to achieve better superconducting properties under much lower pressures.”

    The study is available on the arXiv pre-print server.

    The post Ternary hydride shows signs of room-temperature superconductivity at high pressures appeared first on Physics World.

    https://physicsworld.com/a/ternary-hydride-shows-signs-of-room-temperature-superconductivity-at-high-pressures/
    Isabelle Dumé

    Scientific collaborations increasingly more likely to be led by Chinese scientists, finds study

    Researchers predict that leadership parity will soon be reached between China and the US

    The post Scientific collaborations increasingly more likely to be led by Chinese scientists, finds study appeared first on Physics World.

    International research collaborations will be increasingly led by scientists in China over the coming decade. That is according to a new study by researchers at the University of Chicago, which finds that the power balance in international science has shifted markedly away from the US and towards China over the last 25 years (Proc. Natl. Acad. Sci. 122 e2414893122).

    To explore China’s role in global science, the team used a machine-learning model to predict the lead researchers of almost six million scientific papers that involved international collaboration listed by online bibliographic catalogue OpenAlex. The model was trained on author data from 80 000 papers published in high-profile journals that routinely detail author contributions, including team leadership.

    The study found that between 2010 and 2012 there were only 4429 scientists from China who were likely to have led China-US collaborations. By 2023, this number had risen to 12 714, meaning that the proportion of team leaders affiliated with Chinese institutions had risen from 30% to 45%.

    Key areas

    If this trend continues, China will hit “leadership parity” with the US in chemistry, materials science and computer science by 2028, with maths, physics and engineering being level by 2031. The analysis also suggests that China will achieve leadership parity with the US in eight “critical technology” areas by 2030, including AI, semiconductors, communications, energy and high-performance computing.

    For China-UK partnerships, the model found that equality had already been reached in 2019, while EU and China leadership roles will be on par this year or next. The authors also found that China has been actively training scientists in nations in the “Belt and Road Initiative” which seeks to connect China closer to the world through investments and infrastructure projects.

    This, the researchers warn, limits the ability to isolate science done in China. Instead, they suggest that it could inspire a different course of action, with the US and other countries expanding their engagement with the developing world to train a global workforce and accelerate scientific advancements beneficial to their economies.

    The post Scientific collaborations increasingly more likely to be led by Chinese scientists, finds study appeared first on Physics World.

    https://physicsworld.com/a/scientific-collaborations-increasingly-more-likely-to-be-led-by-chinese-scientists-finds-study/
    No Author

    Unlocking the potential of 2D materials: graphene and much more

    This podcast features Antonio Rossi at the Italian Institute of Technology

    The post Unlocking the potential of 2D materials: graphene and much more appeared first on Physics World.

    This episode explores the scientific and technological significance of 2D materials such as graphene. My guest is Antonio Rossi, who is a researcher in 2D materials engineering at the Italian Institute of Technology in Genoa.

    Rossi explains why 2D materials are fundamentally different from their 3D counterparts – and how these differences are driving scientific progress and the development of new and exciting technologies.

    Graphene is the most famous 2D material and Rossi talks about today’s real-world applications of graphene in coatings. We also chat about the challenges facing scientists and engineers who are trying to exploit graphene’s unique electronic properties.

    Rossi’s current research focuses on two other promising 2D materials – tungsten disulphide and hexagonal boron nitride. He explains why tungsten disulphide shows great technological promise because of its favourable electronic and optical properties; and why hexagonal boron nitride is emerging as an ideal substrate for creating 2D devices.

    Artificial intelligence (AI) is becoming an important tool in developing new 2D materials. Rossi explains how his team is developing feedback loops that connect AI with the fabrication and characterization of new materials. Our conversation also touches on the use of 2D materials in quantum science and technology.

    IOP Publishing’s new Progress In Series: Research Highlights website offers quick, accessible summaries of top papers from leading journals like Reports on Progress in Physics and Progress in Energy. Whether you’re short on time or just want the essentials, these highlights help you expand your knowledge of leading topics.

    The post Unlocking the potential of 2D materials: graphene and much more appeared first on Physics World.

    https://physicsworld.com/a/unlocking-the-potential-of-2d-materials-graphene-and-much-more/
    Hamish Johnston

    Ultrasound probe maps real-time blood flow across entire organs

    Multi-lens array probe visualizes the micro-vasculature of entire large organs without compromising image resolution or frame rate

    The post Ultrasound probe maps real-time blood flow across entire organs appeared first on Physics World.

    Microcirculation – the flow of blood through the smallest vessels – is responsible for distributing oxygen and nutrients to tissues and organs throughout the body. Mapping this flow at the whole-organ scale could enhance our understanding of the circulatory system and improve diagnosis of vascular disorders. With this aim, researchers at the Physics for Medicine Paris institute (Inserm, ESPCI-PSL, CNRS) have combined 3D ultrasound localization microscopy (ULM) with a multi-lens array method to image blood flow dynamics in entire organs with micrometric resolution, reporting their findings in Nature Communications.

    “Beyond understanding how an organ functions across different spatial scales, imaging the vasculature of an entire organ reveals the spatial relationships between macro- and micro-vascular networks, providing a comprehensive assessment of its structural and functional organization,” explains senior author Clement Papadacci.

    The 3D ULM technique works by localizing intravenously injected microbubbles. Offering a spatial resolution roughly ten times finer than conventional ultrasound, 3D ULM can map and quantify micro-scale vascular structures. But while the method has proved valuable for mapping whole organs in small animals, visualizing entire organs in large animals or humans is hindered by the limitations of existing technology.

    To enable wide field-of-view coverage while maintaining high-resolution imaging, the team – led by PhD student Nabil Haidour under Papadacci’s supervision – developed a multi-lens array probe. The probe comprises an array of 252 large (4.5 mm²) ultrasound transducer elements. The use of large elements increases the probe’s sensitive area to a total footprint of 104 x 82 mm, while maintaining a relatively low element count.

    Each transducer element is equipped with an individual acoustic diverging lens. “Large elements alone are too directive to create an image, as they cannot generate sufficient overlap or interference between beams,” Papadacci explains. “The acoustic lenses reduce this directivity, allowing the elements to focus and coherently combine signals in reception, thus enabling volumetric image formation.”

    Whole-organ imaging

    After validating their method via numerical simulations and phantom experiments, the team used a multi-lens array probe driven by a clinical ultrasound system to perform 3D dynamic ULM of an entire explanted porcine heart – considered an ideal cardiac model as its vascular anatomies and dimensions are comparable to those of humans.

    The heart was perfused with microbubble solution, enabling the probe to visualize the whole coronary microcirculation network over a large volume of 120 x 100 x 82 mm, with a spatial resolution of around 125 µm. The technique enabled visualization of both large vessels and the finest microcirculation in real time. The team also used a skeletonization algorithm to measure vessel radii at each voxel, which ranged from approximately 75 to 600 µm.

    As well as structural imaging, the probe can assess flow dynamics across all vascular scales, with a high temporal resolution of 312 frames/s. By tracking the microbubbles, the researchers estimated absolute flow velocities ranging from 10 mm/s in small vessels to over 300 mm/s in the largest. They could also differentiate arteries and veins based on the flow direction in the coronary network.
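
    The velocity figures come from tracking each localized microbubble between successive frames. The minimal Python sketch below illustrates that idea using the 312 frames/s rate quoted above; the example positions and the simple frame-to-frame differencing are assumptions for illustration, not the authors’ actual tracking pipeline.

        import numpy as np

        frame_rate = 312.0                    # volumetric frame rate quoted in the article (frames/s)
        dt = 1.0 / frame_rate                 # time between consecutive frames

        # Hypothetical 3D positions (in mm) of one microbubble localized in successive frames
        positions = np.array([[10.00, 5.00, 2.00],
                              [10.05, 5.02, 2.01],
                              [10.11, 5.03, 2.02]])

        displacements = np.diff(positions, axis=0)            # mm moved between frames
        speeds = np.linalg.norm(displacements, axis=1) / dt   # mm/s along the track
        print(speeds.round(1))                # of order tens of mm/s, as reported for small vessels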

    In vivo demonstrations

    Next, the researchers used the multi-lens array probe to image the entire kidney and liver of an anaesthetized pig at the veterinary school in Maisons-Alfort, with the probe positioned in front of the kidney or liver, respectively, and held using an articulated arm. They employed electrocardiography to synchronize the ultrasound acquisitions with periods of minimal respiratory motion and injected microbubble solution intravenously into the animal’s ear.

    In vivo imaging of a porcine kidney
    In vivo imaging Left: 3D microbubble density map of the porcine kidney. Centre: 3D flow map of microbubble velocity distribution. Right: 3D flow map showing arterial (red) and venous (blue) flow. (Courtesy: CC BY 4.0/Nat. Commun. 10.1038/s41467-025-64911-z)

    The probe mapped the vascular network of the kidney over a 60 x 80 x 40 mm volume with a spatial resolution of 147 µm. The maximum 3D absolute flow velocity was approximately 280 mm/s in the large vessels and the vessel radii ranged from 70 to 400 µm. The team also used directional flow measurements to identify the arterial and venous flow systems.

    Liver imaging is more challenging due to respiratory, cardiac and stomach motions. Nevertheless, 3D dynamic ULM enabled high-depth visualization of a large volume of liver vasculature (65 x 100 x 82 mm) with a spatial resolution of 200 µm. Here, the researchers used dynamic velocity measurement to identify the liver’s three blood networks (arterial, venous and portal veins).

    “The combination of whole-organ volumetric imaging with high-resolution vascular quantification effectively addresses key limitations of existing modalities, such as ultrasound Doppler imaging, CT angiography and 4D flow MRI,” they write.

    Clinical applications of 3D dynamic ULM still need to be demonstrated, but Papadacci suggests that the technique has strong potential for evaluating kidney transplants, coronary microcirculation disorders, stroke, aneurysms and neoangiogenesis in cancer. “It could also become a powerful tool for monitoring treatment response and vascular remodelling over time,” he adds.

    Papadacci and colleagues anticipate that translation to human applications will be possible in the near future and plan to begin a clinical trial early in 2026.

    The post Ultrasound probe maps real-time blood flow across entire organs appeared first on Physics World.

    https://physicsworld.com/a/ultrasound-probe-maps-real-time-blood-flow-across-entire-organs/
    Tami Freeman

    Inge Lehmann: the ground-breaking seismologist who faced a rocky road to success

    Kate Gardner reviews If I Am Right, and I Know I Am: Inge Lehmann, the Woman Who Discovered Earth’s Innermost Secret by Hanne Strager

    The post Inge Lehmann: the ground-breaking seismologist who faced a rocky road to success appeared first on Physics World.

    Inge Lehmann
    Enigmatic Inge Lehmann around the time she quit her job at Denmark’s Geodetic Institute in 1953. (Courtesy: GEUS)

    In the 1930s a little-known Danish seismologist calculated that the Earth has a solid inner core, within the liquid outer core identified just a decade earlier. The international scientific community welcomed Inge Lehmann as a member of the relatively new field of geophysics – yet in her home country, Lehmann was never really acknowledged as more than a very competent keeper of instruments.

    It was only after retiring from her seismologist job aged 65 that Lehmann was able to devote herself full time to research. For the next 30 years, Lehmann worked and published prolifically, finally receiving awards and plaudits that were well deserved. However, this remarkable scientist, who died in 1993 aged 104, rarely appears in short histories of her field.

    In a step to address this, we now have a biography of Lehmann: If I Am Right, and I Know I Am by Hanne Strager, a Danish biologist, science museum director and science writer. Strager pieces together Lehmann’s life in great detail, as well as providing potted histories of the scientific areas that Lehmann contributed to.

    A brief glance at the chronology of Lehmann’s education and career would suggest that she was a late starter. She was 32 when she graduated with a bachelor’s degree in mathematics from the University of Copenhagen, and 40 when she received her master’s degree in geodesy and was appointed state geodesist for Denmark. Lehmann faced a litany of struggles in her younger years, from health problems and money issues to the restrictions placed on most women’s education in the first decades of the 20th century.

    The limits did not come from her family. Lehmann and her sister were sent to good schools, she was encouraged to attend university, and was never pressed to get married, which would likely have meant the end of her education. When she asked her father’s permission to go to the University of Cambridge, his objection was the cost – though the money was found and Lehmann duly went to Newnham College in 1910. While there she passed all the preliminary exams to study for Cambridge’s legendarily tough mathematical tripos but then her health forced her to leave.

    Lehmann was suffering from stomach pains; she had trouble sleeping; her hair was falling out. And this was not her first breakdown. She had previously studied for a year at the University of Copenhagen before then, too, dropping out and moving to the countryside to recover her health.

    The cause of Lehmann’s recurrent breakdowns is unknown. They unfortunately fed into the prevailing view of the time that women were too fragile for the rigours of higher learning. Strager attempts to unpick these historical attitudes from Lehmann’s very real medical issues. She posits that Lehmann had severe anxiety or a physical limitation to how hard she could push herself. But this conclusion fails to address the hostile conditions Lehmann was working in.

    In Cambridge Lehmann formed firm friendships that lasted the rest of her life. But women there did not have the same access to learning as men. They were barred from most libraries and laboratories; could not attend all the lectures; were often mocked and belittled by professors and male students. They could sit exams but, even if they passed, would not be awarded a degree. This was a contributing factor when after the First World War Lehmann decided to complete her undergraduate studies in Copenhagen rather than Cambridge.

    More than meets the eye

    Lehmann is described as quiet, shy, reticent. But she could be eloquent in writing and, once her career began, she established connections with scientists all over the world by writing to them frequently. She was also not the wallflower she initially appeared to be. When she was hired as an assistant at Denmark’s Institute for the Measurement of Degrees, she quickly complained that she was being used as an office clerk, not a scientist, and she would not have accepted the job had she known this was the role. She was instead given geometry tasks that she found intellectually stimulating, which led her to seismology.

    Unfortunately, soon after this Lehmann’s career development stalled. While her title of “state geodesist” sounds impressive, she was the only seismologist in Denmark for decades, responsible for all the seismographs in Denmark and Greenland. Her days were filled with the practicalities of instrument maintenance and publishing reports of all the data collected.

    Photo of six people and a dog outside a low wooden building in a snowy landscape
    Intrepid Inge Lehmann at the Ittoqqortoormiit (Scoresbysund) seismic station in Greenland c. 1928. A keen hiker, Lehmann was comfortable in cold and remote environments. (Courtesy: GEUS)

    Despite repeated requests Lehmann didn’t receive an assistant, which meant she never got round to completing a PhD, though she did work towards one in her evenings and weekends. Time and again opportunities for career advancement went to men who had the title of doctor but far less real experience in geophysics. Even after she co-founded the Danish Geophysical Society in 1934, her native country overlooked her.

    The breakthrough that should have changed this attitude among the men around her came in 1936, when she published “P’ ”. This innocuous-sounding paper was revolutionary, but it was based firmly on the P wave and S wave measurements that Lehmann routinely monitored.

    In If I Am Right, and I Know I Am, Strager clearly explains what P and S waves are. She also highlights why they were being studied by both state seismologist Lehmann and Cambridge statistician Harold Jeffreys, and how they led to both scientists’ biggest breakthroughs.

    After any seismological disturbance, P and S waves propagate through the Earth. P waves move at different speeds according to the material they encounter, while S waves cannot pass through liquid or air. This knowledge allowed Lehmann to calculate whether any fluctuations in seismograph readings were earthquakes, and if so where the epicentre was located. And it led to Jeffreys’ insight that the Earth must have a liquid core.
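
    The practical payoff of that speed difference is that the lag between the P-wave and S-wave arrivals at a single station gives the distance to the source. The sketch below illustrates this textbook estimate with assumed crustal velocities; it is not a reconstruction of Lehmann’s own calculations.

        # Epicentral distance from the S-P arrival-time difference (illustrative values only)
        v_p = 6.0    # typical crustal P-wave speed in km/s (assumption)
        v_s = 3.5    # typical crustal S-wave speed in km/s (assumption)

        def distance_km(sp_lag_seconds):
            """Distance at which the S wave arrives sp_lag_seconds after the P wave."""
            return sp_lag_seconds / (1.0 / v_s - 1.0 / v_p)

        print(round(distance_km(30.0)))   # a 30 s lag puts the source roughly 250 km away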

    Lehmann’s attention to detail meant she spotted a “discontinuity” in P waves that did not quite match a purely liquid core. She immediately wrote to Jeffreys that she believed there was another layer to the Earth, a solid inner core, but he was dismissive – which led to her writing the statement that forms the title of this book. Undeterred, she published her discovery in the journal of the International Union of Geodesy and Geophysics.

    Home from home

    In 1951 Lehmann visited the institution that would become her second home: the Lamont Geological Observatory in New York state. Its director Maurice Ewing invited her to work there on a sabbatical, arranging all the practicalities of travel and housing on her behalf.

    Here, Lehmann finally had something she had lacked her entire career: friendly collaboration with colleagues who not only took her seriously but also revered her. Lehmann took retirement from her job in Denmark and began to spend months of every year at the Lamont Observatory until well into her 80s.

    Photo of four women in front of a blackboard looking at a table covered with cakes
    Valued colleague A farewell party held for Inge Lehmann in 1954 at Lamont Geological Observatory after one of her research stays. (Courtesy: GEUS)

    Though Strager tells us this “second phase” of Lehmann’s career was prolific, she provides little detail about the work Lehmann did. She initially focused on detecting nuclear tests during the Cold War. But her later work was more varied, and continued after she lost most of her vision. Lehmann published her final paper aged 99.

    If I Am Right, and I Know I Am is bookended with accounts of Strager’s research into one particular letter sent to Lehmann, an anonymous (because the final page has been lost) declaration of love. It’s an insight into the lengths Strager went to – reading all the surviving correspondence to and from Lehmann; interviewing living relatives and colleagues; working with historians both professional and amateur; visiting archives in several countries.

    But for me it hit the wrong tone. The preface and epilogue are mostly speculation about Lehmann’s love life. Lehmann destroyed a lot of her personal correspondence towards the end of her life, and chose what papers to donate to an archive. To me those are the actions of a woman who wants to control the narrative of her life – and does not want her romances to be written about. I would have preferred instead another chapter about her later work, of which we know she was proud.

    But for the majority of its pages, this is a book of which Strager can be proud. I came away from it with great admiration for Lehmann and an appreciation for how lonely life was for many women scientists even in recent history.

    • 2025 Columbia University Press 308 pp, £25hb

    The post Inge Lehmann: the ground-breaking seismologist who faced a rocky road to success appeared first on Physics World.

    https://physicsworld.com/a/inge-lehmann-the-ground-breaking-seismologist-who-faced-a-rocky-road-to-success/
    Kate Gardner

    Rapidly spinning black holes put new limit on ultralight bosons

    Gravitational waves confirm predictions of Einstein and Kerr

    The post Rapidly spinning black holes put new limit on ultralight bosons appeared first on Physics World.

    The LIGO–Virgo–KAGRA collaboration has detected strong evidence for second-generation black holes, which were formed from earlier mergers of smaller black holes. The two gravitational wave signals provide one of the strongest confirmations to date for how Einstein’s general theory of relativity describes rotating black holes. Studying such objects also provides a testbed for probing new physics beyond the Standard Model.

    Over the past decade, the global network of interferometers operated by LIGO, Virgo, and KAGRA has detected close to 300 gravitational waves (GWs) – mostly from the mergers of binary black holes.

    In October 2024 the network detected a clear signal that pointed back to a merger that occurred 700 million light-years away. The progenitor black holes were 20 and 6 solar masses and the larger object was spinning at 370 Hz, which makes it one of the fastest-spinning black holes ever observed.

    Just one month later, the collaboration detected the coalescence of another highly imbalanced binary (17 and 8 solar masses), 2.4 billion light-years away. This signal was even more unusual – showing for the first time that the larger companion was spinning in the opposite direction of the binary orbit.

    Massive and spinning

    While conventional wisdom says black holes should not be spinning at such high rates, the observations were not entirely unexpected. “With both events having one black hole, which is both significantly more massive than the other and rapidly spinning, [the observations] provide tantalizing evidence that these black holes were formed from previous black hole mergers,” explains Stephen Fairhurst at Cardiff University, spokesperson of the LIGO Collaboration. If this were the case, the two GW signals – called GW241011 and GW241110 – would be the first observations of second-generation black holes. This is because when a binary merges, the resulting second-generation object tends to have a large spin.

    The GW241011 signal was particularly clear, which allowed the team to make the third-ever observation of higher harmonic modes. These are overtones in the GW signal that become far clearer when the masses of the coalescing bodies are highly imbalanced.

    The precision of the GW241011 measurement provides one of the most stringent verifications so far of general relativity. The observations also support Roy Kerr’s prediction that rapid rotation distorts the shape of a black hole.

    Kerr and Einstein confirmed

    “We now know that black holes are shaped like Einstein and Kerr predicted, and general relativity can add two more checkmarks in its list of many successes,” says team member Carl-Johan Haster at the University of Nevada, Las Vegas. “This discovery also means that we’re more sensitive than ever to any new physics that might lie beyond Einstein’s theory.”

    This new physics could include hypothetical particles called ultralight bosons. These could form in clouds just outside the event horizons of spinning black holes, and would gradually drain a black hole’s rotational energy via a quantum effect called superradiance.

    The idea is that the observed second-generation black holes had been spinning for billions of years before their mergers occurred. This means that if ultralight bosons were present, they cannot have removed lots of angular momentum from the black holes. This places the tightest constraint to date on the mass of ultralight bosons.

    “Planned upgrades to the LIGO, Virgo and KAGRA detectors will enable further observations of similar systems,” Fairhurst says. “They will enable us to better understand both the fundamental physics governing these black hole binaries and the astrophysical mechanisms that lead to their formation.”

    Haster adds, “Each new detection provides important insights about the universe, reminding us that each observed merger is not only an astrophysical discovery but also an invaluable laboratory for probing the fundamental laws of physics”.

    The observations are described in The Astrophysical Journal Letters.

    The post Rapidly spinning black holes put new limit on ultralight bosons appeared first on Physics World.

    https://physicsworld.com/a/rapidly-spinning-black-holes-put-new-limit-on-ultralight-bosons/
    No Author

    Making quantum computers more reliable

    Using self-testing methods, scientists validate error-correcting codes on photonic and superconducting systems

    The post Making quantum computers more reliable appeared first on Physics World.

    Quantum error correction codes protect quantum information from decoherence and quantum noise, and are therefore crucial to the development of quantum computing and the creation of more reliable and complex quantum algorithms. One example is the five-qubit error correction code, five being the minimum number of physical qubits required to fix any single-qubit error. It encodes one logical qubit – the protected unit of quantum information – across five physical qubits (basic units of quantum information realized with trapped ions, superconducting circuits or quantum dots), arranged so that an error on any single physical qubit can be detected and corrected. Yet imperfections in the hardware can still lead to quantum errors.

    A method of testing quantum error correction codes is self-testing. Self-testing is a powerful tool for verifying quantum properties using only input-output statistics, treating quantum devices as black boxes. It has evolved from bipartite systems consisting of two quantum subsystems, to multipartite entanglement, where entanglement is among three or more subsystems, and now to genuinely entangled subspaces, where every state is fully entangled across all subsystems. Genuinely entangled subspaces offer stronger, guaranteed entanglement than general multipartite states, making them more reliable for quantum computing and error correction.

    In this research, self-testing techniques are used to certify genuinely entangled logical subspaces within the five-qubit code on photonic and superconducting platforms. This is achieved by preparing informationally complete logical states that span the entire logical space, meaning the set is rich enough to fully characterize the behaviour of the system. The researchers then deliberately introduce basic quantum errors by simulating Pauli errors on the physical qubits, which mimics real-world noise. Finally, they use mathematical tests known as Bell inequalities, adapted to the framework used in quantum error correction, to check whether the system remains within the initial logical subspace after the errors are introduced.
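
    To make the error-injection step concrete, the short sketch below shows how a single Pauli error reveals itself against the stabilizers of the five-qubit code. The generators (cyclic shifts of XZZXI) are the standard ones for this code, but the simple anticommutation count is only an illustration of the principle, not the certification protocol used in the paper.

        # Stabilizer generators of the five-qubit code: cyclic shifts of XZZXI
        generators = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]

        def anticommutes(p, q):
            """Two single-qubit Paulis anticommute iff both are non-identity and different."""
            return p != "I" and q != "I" and p != q

        def syndrome(error):
            """One syndrome bit per generator: 1 if the error string anticommutes with it."""
            return [sum(anticommutes(e, g) for e, g in zip(error, gen)) % 2
                    for gen in generators]

        print(syndrome("IIIII"))   # no error: trivial syndrome [0, 0, 0, 0]
        print(syndrome("IIXII"))   # an X error on the third qubit gives a non-trivial syndrome

    Because each of the 15 possible single-qubit Pauli errors produces a distinct non-trivial syndrome, the code can both detect and identify the error, which is why five physical qubits are the minimum needed for this task.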

    Extractability measures tell you how close the tested quantum system is to the ideal target state, with 1 being a perfect match. The certification is supported by extractability measures of at least 0.828 ± 0.006 and 0.621 ± 0.007 for the photonic and superconducting systems, respectively. The photonic platform achieved a high extractability score, meaning the logical subspace was very close to the ideal one. The superconducting platform had a lower score but still showed meaningful entanglement. These scores show that the self-testing method works in practice and confirm strong entanglement in the five-qubit code on both platforms.

    This research contributes to the advancement of quantum technologies by providing robust methods for verifying and characterizing complex quantum structures, which is essential for the development of reliable and scalable quantum systems. It also demonstrates that device-independent certification can extend beyond quantum states and measurements to more general quantum structures.

    Read the full article

    Certification of genuinely entangled subspaces of the five qubit code via robust self-testing

    Yu Guo et al 2025 Rep. Prog. Phys. 88 050501

    Do you want to learn more about this topic?

    Quantum error correction for beginners by Simon J Devitt, William J Munro and Kae Nemoto (2013)

    The post Making quantum computers more reliable appeared first on Physics World.

    https://physicsworld.com/a/making-quantum-computers-more-reliable/
    Lorna Brigham

    Quantum ferromagnets without the usual tricks: a new look at magnetic excitations

    New research reveals how quantum effects in double-exchange ferromagnets drive unexpected magnetic behaviour

    The post Quantum ferromagnets without the usual tricks: a new look at magnetic excitations appeared first on Physics World.

    For almost a century, physicists have tried to understand why and how materials become magnetic. From refrigerator magnets to magnetic memories, the microscopic origins of magnetism remain a surprisingly subtle puzzle — especially in materials where electrons behave both like individual particles and like a collective sea.

    In most transition-metal compounds, magnetism comes from the dance between localized and mobile electrons. Some electrons stay near their home atoms and form tiny magnetic moments (spins), while others roam freely through the crystal. The interaction between these two types of electrons produces “double-exchange” ferromagnetism — the mechanism that gives rise to the rich magnetic behaviour of materials such as manganites, famous for their colossal magnetoresistance (a dramatic change in electrical resistance under a magnetic field).

    Traditionally, scientists modelled this behaviour by treating the localized spins as classical arrows — big and well-defined, like compass needles. This approximation works well enough for explaining basic ferromagnetism, but experiments over the last few decades have revealed strange features that defy the classical picture. In particular, neutron scattering studies of manganites showed that the collective spin excitations, called magnons, do not behave as expected. Their energy spectrum “softens” (the waves slow down) and their sharp signals blur into fuzzy continua — a sign that the magnons are losing their coherence. Until now, these effects were usually blamed on vibrations of the atomic lattice (phonons) or on complex interactions between charge, spin, and orbital motion.

    Left to right: Adriana Moreo and Elbio Dagotto from the University of Tennessee (USA), Takami Tohyama from Tokyo University of Science (Japan), and Marcin Mierzejewski and Jacek Herbrych from Wrocław University of Science and Technology (Courtesy: Herbrych/Wrocław University of Science and Technology)

    A new theoretical study challenges that assumption. By going fully quantum mechanical — treating every localized spin not as a classical arrow but as a true quantum object that can fluctuate, entangle, and superpose — the researchers have reproduced these puzzling experimental observations without invoking phonons at all. Using two powerful model systems (a quantum version of the Kondo lattice and a two-orbital Hubbard model), the team simulated how electrons and spins interact when no semiclassical approximations are allowed.

    The results reveal a subtle quantum landscape. Instead of a single type of electron excitation, the system hosts two. One behaves like a spinless fermion — a charge carrier stripped of its magnetic identity. The other forms a broad, “incoherent” band of excitations arising from local quantum triplets. These incoherent states sit close to the Fermi level and act as a noisy background — a Stoner-like continuum — that the magnons can scatter off. The result: magnons lose their coherence and energy in just the way experiments observe.

    Perhaps most surprisingly, this mechanism doesn’t rely on the crystal lattice at all. It’s an intrinsic consequence of the quantum nature of the spins themselves. Larger localized spins, such as those in classical manganites, tend to suppress the effect — explaining why decoherence is weaker in some materials than others. Consequently, the implications reach beyond manganites. Similar quantum interplay may occur in iron-based superconductors, ruthenates, and heavy-fermion systems where magnetism and superconductivity coexist. Even in materials without permanent local moments, strong electronic correlations can generate the same kind of quantum magnetism.

    In short, this work uncovers a purely electronic route to complex magnetic dynamics — showing that the quantum personality of the electron alone can mimic effects once thought to require lattice distortions. By uniting electronic structure and spin excitations under a single, fully quantum description, it moves us one step closer to understanding how magnetism truly works in the most intricate materials.

    Read the full article

    Magnon damping and mode softening in quantum double-exchange ferromagnets

    A Moreo et al 2025 Rep. Prog. Phys. 88 068001

    Do you want to learn more about this topic?

    Nanoscale electrodynamics of strongly correlated quantum materials by Mengkun Liu, Aaron J Sternbach and D N Basov (2017)

    The post Quantum ferromagnets without the usual tricks: a new look at magnetic excitations appeared first on Physics World.

    https://physicsworld.com/a/quantum-ferromagnets-without-the-usual-tricks/
    Lorna Brigham

    Fluid-based laser scanning technique could improve brain imaging

    Laser scanning microscopy using a fluid-based prism to steer the light could help researchers learn more about neurological conditions such as Alzheimer’s disease

    The post Fluid-based laser scanning technique could improve brain imaging appeared first on Physics World.

    Using a new type of low-power, compact, fluid-based prism to steer the beam in a laser scanning microscope could transform brain imaging and help researchers learn more about neurological conditions such as Alzheimer’s disease.

    The “electrowetting prism” used in this work was developed by a team led by Juliet Gopinath from the electrical, computer and energy engineering and physics departments at the University of Colorado at Boulder (CU Boulder) and Victor Bright from CU Boulder’s mechanical engineering department, as part of their ongoing collaboration on electrically controllable optical elements for improving microscopy techniques.

    “We quickly became interested in biological imaging, and work with a neuroscience group at University of Colorado Denver Anschutz Medical Campus that uses mouse models to study neuroscience,” Gopinath tells Physics World. “Neuroscience is not well understood, as illustrated by the neurodegenerative diseases that don’t have good cures. So a great benefit of this technology is the potential to study, detect and treat neurodegenerative diseases such as Alzheimer’s, Parkinson’s and schizophrenia,” she explains.

    The researchers fabricated their patented electrowetting prism using custom deposition and lithography methods. The device consists of two immiscible liquids housed in a 5 mm tall, 4 mm diameter glass tube, with a dielectric layer on the inner wall coating four independent electrodes. When an electric field is produced by applying a potential difference between a pair of electrodes on opposite sides of the tube, it changes the surface tension and therefore the curvature of the meniscus between the two liquids. Light passing through the device is refracted by a different amount depending on the angle of tilt of the meniscus (as well as on the optical properties of the liquids chosen), enabling beams to be steered by changing the voltage on the electrodes.
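
    As a rough illustration of the steering principle, the sketch below applies Snell’s law at a tilted liquid–liquid interface. The refractive indices and tilt angle are assumed values chosen only to show the scale of the effect; they are not the parameters of the CU Boulder device.

        import numpy as np

        # Illustrative values only; the real device's liquids and tilt range may differ
        n1, n2 = 1.33, 1.49            # refractive indices of the two immiscible liquids (assumed)
        tilt = np.radians(10.0)        # meniscus tilt set by the electrode voltages (assumed)

        # Snell's law at the tilted interface for a beam travelling along the tube axis
        theta2 = np.arcsin(n1 / n2 * np.sin(tilt))
        steering_angle = tilt - theta2             # deflection of the beam away from the axis
        print(f"{np.degrees(steering_angle):.2f} degrees of beam steering")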

    Beam steering for scanning in imaging and microscopy can be achieved via several means, including mechanically controlled mirrors, glass prisms or acousto-optic deflectors (in which a sound wave is used to diffract the light beam). But, unlike the new electrowetting prisms, these methods consume too much power and are not small or lightweight enough to be used for miniature microscopy of neural activity in the brains of living animals.

    In tests detailed in Optics Express, the researchers integrated their electrowetting prism into an existing two-photon laser scanning microscope and successfully imaged individual 5 µm-diameter fluorescent polystyrene beads, as well as large clusters of those beads.

    They also used computer simulation to study how the liquid–liquid interface moved, and found that when a sinusoidal voltage is used for actuation, at 25 and 75 Hz, standing wave resonance modes occur at the meniscus – a result closely matched by a subsequent experiment that showed resonances at 24 and 72 Hz. These resonance modes are important for enhancing device performance since they increase the angle through which the meniscus can tilt and thus enable optical beams to be steered through a greater range of angles, which helps minimize distortions when raster scanning in two dimensions.

    Bright explains that this research built on previous work in which an electrowetting prism was used in a benchtop microscope to image a mouse brain. He cites seeing the individual neurons as a standout moment that, coupled with the current results, shows their prism is now “proven and ready to go”.

    Gopinath and Bright caution that “more work is needed to allow human brain scans, such as limiting voltage requirements, allowing the device to operate at safe voltage levels, and miniaturization of the device to allow faster scan speeds and acquiring images at a much faster rate”. But they add that miniaturization would also make the device useful for endoscopy, robotics, chip-scale atomic clocks and space-based communication between satellites.

    The team has already begun investigating two other potential applications: LiDAR (light detection and ranging) systems and optical coherence tomography (OCT). Next, the researchers “hope to integrate the device into a miniaturized microscope to allow imaging of the brain in freely moving animals in natural outside environments,” they say. “We also aim to improve the packaging of our devices so they can be integrated into many other imaging systems.”

    The post Fluid-based laser scanning technique could improve brain imaging appeared first on Physics World.

    https://physicsworld.com/a/fluid-based-laser-scanning-technique-could-improve-brain-imaging/
    No Author

    Intrigued by quantum? Explore the 2025 Physics World Quantum Briefing 2.0

    Discover our free-to-read 62-page digital magazine now

    The post Intrigued by quantum? Explore the 2025 Physics World Quantum Briefing 2.0 appeared first on Physics World.

    To coincide with a week of quantum-related activities organized by the Institute of Physics (IOP) in the UK, Physics World has just published a free-to-read digital magazine to bring you up to date about all the latest developments in the quantum world.

    The 62-page Physics World Quantum Briefing 2.0 celebrates the International Year of Quantum Science and Technology (IYQ) and also looks ahead to a quantum-enhanced future.

    Marking 100 years since the advent of quantum mechanics, IYQ aims to raise awareness of the impact of quantum physics and its myriad future applications, with a global diary of quantum-themed public talks, scientific conferences, industry events and more.

    The 2025 Physics World Quantum Briefing 2.0, which follows on from the first edition published in May, contains yet more quantum topics for you to explore and is once again divided into “history”, “mystery” and “industry”.

    You can find out more about the contributions of Indian physicist Satyendra Nath Bose to quantum science; explore weird phenomena such as causal order and quantum superposition; and discover the latest applications of quantum computing.

    A century after quantum mechanics was first formulated, many physicists are still undecided on some of the most basic foundational questions. There’s no agreement on which interpretation of quantum mechanics holds strong; whether the wavefunction is merely a mathematical tool or a true representation of reality; or what impact an observer has on a quantum state.

    Some of the biggest unanswered questions in physics – such as finding the quantum/classical boundary or reconciling gravity and quantum mechanics – lie at the heart of these conundrums. So as we look to the future of quantum – from its fundamentals to its technological applications – let us hope that some answers to these puzzles will become apparent as we crack the quantum code to our universe.

    • Read the free Physics World Quantum Briefing 2.0 today.

    This article forms part of Physics World’s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

    Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

    Find out more on our quantum channel.

    The post Intrigued by quantum? Explore the 2025 Physics World Quantum Briefing 2.0 appeared first on Physics World.

    https://physicsworld.com/a/intrigued-by-quantum-explore-the-2025-physics-world-quantum-briefing-2-0/
    Tushna Commissariat

    Quantum computing: hype or hope?

    Honor Powrie explores the current status and future potential of quantum computers

    The post Quantum computing: hype or hope? appeared first on Physics World.

    Unless you’ve been living under a stone, you can’t have failed to notice that 2025 marks the first 100 years of quantum mechanics. A massive milestone, to say the least, about which much has been written in Physics World and elsewhere in what is the International Year of Quantum Science and Technology (IYQ). However, I’d like to focus on a specific piece of quantum technology, namely quantum computing.

    I keep hearing about quantum computers, so people must be using them to do cool things, and surely they will soon be as commonplace as classical computers. But as a physicist-turned-engineer working in the aerospace sector, I struggle to get a clear picture of where things are really at. If I ask friends and colleagues when they expect to see quantum computers routinely used in everyday life, I get answers ranging from “in the next two years” to “maybe in my lifetime” or even “never”.

    Before we go any further, it’s worth reminding ourselves that quantum computing relies on several key quantum properties, including superposition, which gives rise to the quantum bit, or qubit. The basic building block of a quantum computer – the qubit – exists as a combination of 0 and 1 states at the same time and is represented by a probabilistic wave function. Classical computers, in contrast, use binary digital bits that are either 0 or 1.

    Also vital for quantum computers is the notion of entanglement, which is when two or more qubits are co-ordinated, allowing them to share their quantum information. In a highly correlated system, a quantum computer can explore many paths simultaneously. This “massive scale” parallel processing is how quantum computers may solve certain problems exponentially faster than a classical computer.

    The other key phenomenon for quantum computers is quantum interference. The wave-like nature of qubits means that when different probability amplitudes are in phase, they combine constructively to increase the likelihood of the right solution. Conversely, destructive interference occurs when amplitudes are out of phase, making it less likely to get the wrong answer.

    Quantum interference is important in quantum computing because it allows quantum algorithms to amplify the probability of correct answers and suppress incorrect ones, making calculations much faster. Along with superposition and entanglement, it means that quantum computers could process and store vast numbers of probabilities at once, outstripping even the best classical supercomputers.
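
    Two of these ingredients, superposition and interference, can be seen in a few lines of linear algebra. The sketch below represents a single qubit as a two-component amplitude vector and applies a Hadamard gate twice: the first application creates an equal superposition, and the second makes the amplitudes for one outcome cancel through destructive interference. It is a toy illustration of the ideas above, not a model of any particular machine.

        import numpy as np

        ket0 = np.array([1.0, 0.0])                    # the |0> state as an amplitude vector
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

        superposed = H @ ket0       # equal superposition of |0> and |1>
        returned = H @ superposed   # second Hadamard: the |1> amplitudes cancel

        print(np.abs(superposed) ** 2)   # [0.5, 0.5]: either outcome equally likely
        print(np.abs(returned) ** 2)     # [1.0, 0.0]: interference restores |0> with certainty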

    Towards real devices

    To me, it all sounds exciting, but what have quantum computers ever done for us so far? It’s clear that quantum computers are not ready to be deployed in the real world. Significant technological challenges need to be overcome before they become fully realisable. In any case, no-one is expecting quantum computers to displace classical computers “like for like”: they’ll both be used for different things.

    Yet it seems that the very essence of quantum computing is also its Achilles heel. Superposition, entanglement and interference – the quantum properties that will make it so powerful – are also incredibly difficult to create and maintain. Qubits are also extremely sensitive to their surroundings. They easily lose their quantum state due to interactions with the environment, whether via stray particles, electromagnetic fields, or thermal fluctuations. Known as decoherence, it makes quantum computers prone to error.

    That’s why quantum computers need specialized – and often cryogenically controlled – environments to maintain the quantum states necessary for accurate computation. Building a quantum system with lots of interconnected qubits is therefore a major, expensive engineering challenge, with complex hardware and extreme operating conditions. Developing “fault-tolerant” quantum hardware and robust error-correction techniques will be essential if we want reliable quantum computation.

    As for the development of software and algorithms for quantum systems, there’s a long way to go, with a lack of mature tools and frameworks. Quantum algorithms require fundamentally different programming paradigms to those used for classical computers. Put simply, that’s why building reliable, real-world deployable quantum computers remains a grand challenge.

    What does the future hold?

    Despite the huge amount of work that still lies in store, quantum computers have already demonstrated some amazing potential. The US firm D-Wave, for example, claimed earlier this year to have carried out simulations of quantum magnetic phase transitions that wouldn’t be possible with the most powerful classical devices. If true, this was the first time a quantum computer had achieved “quantum advantage” for a practical physics problem (whether the problem was worth solving is another question).

    There is also a lot of research and development going on around the world into solving the qubit stability problem. At some stage, there will likely be a breakthrough design for robust and reliable quantum computer architecture. There is probably a lot of technical advancement happening right now behind closed doors.

    The first real-world applications of quantum computers will be akin to the giant classical supercomputers of the past. If you were around in the 1980s, you’ll remember Cray supercomputers: huge, inaccessible beasts owned by large corporations, government agencies and academic institutions to enable vast amounts of calculations to be performed (provided you had the money).

    And, if I believe what I read, quantum computers will not replace classical computers, at least not initially, but work alongside them, as each has its own relative strengths. Quantum computers will be suited for specific and highly demanding computational tasks, such as drug discovery, materials science, financial modelling, complex optimization problems and increasingly large artificial intelligence and machine-learning models.

    These are all things beyond the limits of classical computer resource. Classical computers will remain relevant for everyday tasks like web browsing, word processing and managing databases, and they will be essential for handling the data preparation, visualization and error correction required by quantum systems.

    And there is one final point to mention, which is cyber security. Quantum computing poses a major threat to existing encryption methods, with potential to undermine widely used public-key cryptography. There are concerns that hackers nowadays are storing their stolen data in anticipation of future quantum decryption.

    Having looked into the topic, I can now see why the timeline for quantum computing is so fuzzy and why I got so many different answers when I asked people when the technology would be mainstream. Quite simply, I still can’t predict how or when the tech stack will pan out. But as IYQ draws to a close, the future for quantum computers is bright.

    • More information about the quantum marketplace can be found in the 2025 Physics World Quantum Briefing 2.0 and in a two-part article by Philip Ball (available here and here).

    The post Quantum computing: hype or hope? appeared first on Physics World.

    https://physicsworld.com/a/quantum-computing-hype-or-hope/
    Honor Powrie

    Modular cryogenics platform adapts to new era of practical quantum computing

    With a cube-based design that fits into a standard rack mount, the ICE-Q platform delivers the reliability and scalability needed to exploit quantum systems in real-world operating environments

    The post Modular cryogenics platform adapts to new era of practical quantum computing appeared first on Physics World.

    Modular and scalable: the ICE-Q cryogenics platform delivers the performance and reliability needed for professional computing environments while also providing a flexible and extendable design. The standard configuration includes a cooling module, a payload with a large sample space, and a side-loading wiring module for scalable connectivity (Courtesy: ICEoxford)

    At the centre of most quantum labs is a large cylindrical cryostat that keeps the delicate quantum hardware at ultralow temperatures. These cryogenic chambers have expanded to accommodate larger and more complex quantum systems, but the scientists and engineers at UK-based cryogenics specialist ICEoxford have taken a radical new approach to the challenge of scalability. They have split the traditional cryostat into a series of cube-shaped modules that slot into a standard 19-inch rack mount, creating an adaptable platform that can easily be deployed alongside conventional computing infrastructure.

    “We wanted to create a robust, modular and scalable solution that enables different quantum technologies to be integrated into the cryostat,” says Greg Graf, the company’s engineering manager. “This approach offers much more flexibility, because it allows different modules to be used for different applications, while the system also delivers the efficiency and reliability that are needed for operational use.”

    The standard configuration of the ICE-Q platform has three separate modules: a cryogenics unit that provides the cooling power, a large payload for housing the quantum chip or experiment, and a patent-pending wiring module that attaches to the side of the payload to provide the connections to the outside world. Up to four of these side-loading wiring modules can be bolted onto the payload at the same time, providing thousands of external connections while still fitting into a standard rack. For applications where space is not such an issue, the payload can be further extended to accommodate larger quantum assemblies and potentially tens of thousands of radio-frequency or fibre-optic connections.

    The cube-shaped form factor provides much improved access to these external connections, whether for designing and configuring the system or for ongoing maintenance work. The outer shell of each module consists of panels that are easily removed, offering a simple mechanism for bolting modules together or stacking them on top of each other to provide a fully scalable solution that grows with the qubit count.

    The flexible design also offers a more practical solution for servicing or upgrading an installed system, since individual modules can be simply swapped over as and when needed. “For quantum computers running in an operational environment it is really important to minimize the downtime,” says Emma Yeatman, senior design engineer at ICEoxford. “With this design we can easily remove one of the modules for servicing, and replace it with another one to keep the system running for longer. For critical infrastructure devices, it is possible to have built-in redundancy that ensures uninterrupted operation in the event of a failure.”

    Other features have been integrated into the platform to make it simple to operate, including a new software system for controlling and monitoring the ultracold environment. “Most of our cryostats have been designed for researchers who really want to get involved and adapt the system to meet their needs,” adds Yeatman. “This platform offers more options for people who want an out-of-the-box solution and who don’t want to get hands on with the cryogenics.”

    Such a bold design choice was enabled in part by a collaborative research project with Canadian company Photonic Inc, funded jointly by the UK and Canada, that was focused on developing an efficient and reliable cryogenics platform for practical quantum computing. That R&D funding helped to reduce the risk of developing an entirely new technology platform that addresses many of the challenges that ICEoxford and its customers had experienced with traditional cryostats. “Quantum technologies typically need a lot of wiring, and access had become a real issue,” says Yeatman. “We knew there was an opportunity to do better.”

    However, converting a large cylindrical cryostat into a slimline and modular form factor demanded some clever engineering solutions. Perhaps the most obvious was creating a frame that allows the modules to be bolted together while still remaining leak tight. Traditional cryostats are welded together to ensure a leak-proof seal, but for greater flexibility the ICEoxford team developed an assembly technique based on mechanical bonding.

    The side-loading wiring module also presented a design challenge. To squeeze more wires into the available space, the team developed a high-density connector for the coaxial cables to plug into. An additional cold-head was also integrated into the module to pre-cool the cables, reducing the overall heat load generated by such large numbers of connections entering the ultracold environment.

    Flexible for the future: the outer shell of the modules is covered with removable panels that make it easy to extend or reconfigure the system (Courtesy: ICEoxford)

    Meanwhile, the speed of the cooldown and the efficiency of operation have been optimized by designing a new type of heat exchanger that is fabricated using a 3D printing process. “When warm gas is returned into the system, a certain amount of cooling power is needed just to compress and liquefy that gas,” explains Kelly. “We designed the heat exchangers to exploit the returning cold gas much more efficiently, which enables us to pre-cool the warm gas and use less energy for the liquefaction.”

    The initial prototype has been designed to operate at 1 K, which is ideal for the photonics-based quantum systems being developed by ICEoxford’s research partner. But the modular nature of the platform allows it to be adapted to diverse applications, with a second project now underway with the Rutherford Appleton Lab to develop a module that will be used at the forefront of the global hunt for dark matter.

    Already on the development roadmap are modules that can sustain temperatures as low as 10 mK – which is typically needed for superconducting quantum computing – and a 4 K option for trapped-ion systems. “We already have products for each of those applications, but our aim was to create a modular platform that can be extended and developed to address the changing needs of quantum developers,” says Kelly.

    As these different options come onstream, the ICEoxford team believes that it will become easier and quicker to deliver high-performance cryogenic systems that are tailored to the needs of each customer. “It normally takes between six and twelve months to build a complex cryogenics system,” says Graf. “With this modular design we will be able to keep some of the components on the shelf, which would allow us to reduce the lead time by several months.”

    More generally, the modular and scalable platform could be a game-changer for commercial organizations that want to exploit quantum computing in their day-to-day operations, as well as for researchers who are pushing the boundaries of cryogenics design with increasingly demanding specifications. “This system introduces new avenues for hardware development that were previously constrained by the existing cryogenics infrastructure,” says Kelly. “The ICE-Q platform directly addresses the need for colder base temperatures, larger sample spaces, higher cooling powers, and increased connectivity, and ensures our clients can continue their aggressive scaling efforts without being bottlenecked by their cooling environment.”

    • You can find out more about the ICE-Q platform by contacting the ICEoxford team at iceoxford.com, or via email at sales@iceoxford.com. They will also be presenting the platform at the UK’s National Quantum Technologies Showcase in London on 7 November, with a further launch at the American Physical Society meeting in March 2026.

    The post Modular cryogenics platform adapts to new era of practical quantum computing appeared first on Physics World.

    https://physicsworld.com/a/modular-cryogenics-platform-adapts-to-new-era-of-practical-quantum-computing/
    No Author

    Portable source could produce high-energy muon beams

    Research could lead to ultracompact muon sources for applications such as tomography

    The post Portable source could produce high-energy muon beams appeared first on Physics World.

    Due to government shutdown restrictions currently in place in the US, the researchers who headed up this study have not been able to comment on their work

    Laser plasma acceleration (LPA) may be used to generate multi-gigaelectronvolt muon beams, according to physicists at the Lawrence Berkeley National Laboratory (LBNL) in the US. Their work might help in the development of ultracompact muon sources for applications such as muon tomography – which images the interior of large objects that are inaccessible to X-ray radiography.

    Muons are charged subatomic particles that are produced in large quantities when cosmic rays collide with atoms 15–20 km up in the atmosphere. Muons have similar properties to electrons but are around 200 times heavier. This means they can travel much further through solid structures than electrons. This property is exploited in muon tomography, which analyses how muons penetrate objects and uses this information to produce 3D images.

    The technique is similar to X-ray tomography used in medical imaging, with the cosmic-ray radiation taking the place of artificially generated X-rays and muon trackers the place of X-ray detectors. Indeed, depending on their energy, muons can traverse metres of rock or other materials, making them ideal for imaging thick and large structures. As a result, the technique has been used to peer inside nuclear reactors, pyramids and volcanoes.

    As many as 10,000 muons from cosmic rays reach each square metre of the Earth’s surface every minute. These naturally produced particles have unpredictable properties, however, and they arrive predominantly from near-vertical directions. This limited directionality means that it can take months to accumulate enough data for tomography.
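    To get a feel for why data collection is so slow, here is a back-of-the-envelope sketch. None of the numbers below come from the study: the detector area, the fraction of muons on useful trajectories and the number of tracks required are all assumed values chosen purely for illustration.

    ```python
    # Rough estimate of exposure time for cosmic-ray muon tomography.
    # All inputs are illustrative assumptions, not values from any study.
    flux_per_m2_per_min = 10_000   # cosmic-ray muons reaching the surface (figure quoted above)
    detector_area_m2 = 1.0         # assumed detector size
    useful_fraction = 1e-4         # assumed fraction of muons on trajectories that probe the target
    tracks_needed = 1e5            # assumed number of useful tracks for a usable image

    minutes = tracks_needed / (flux_per_m2_per_min * detector_area_m2 * useful_fraction)
    print(f"~{minutes / (60 * 24):.0f} days of exposure")  # roughly 69 days, i.e. a couple of months
    ```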

    Another option is to use the large numbers of low-energy muons that can be produced in proton accelerator facilities by smashing a proton beam onto a fixed carbon target. However, these accelerators are large and expensive facilities, limiting their use in muon tomography.

    A new compact source

    Physicists led by Davide Terzani have now developed a new compact muon source based on LPA-generated electron beams. Such a source, if optimized, could be deployed in the field and could even produce muon beams in specific directions.

    In LPA, an ultra-intense, ultra-short, and tightly focused laser pulse propagates into an “under-dense” gas. The pulse’s extremely high electric field ionizes the gas atoms, freeing the electrons from the nuclei, so generating a plasma. The ponderomotive force, or radiation pressure, of the intense laser pulse displaces these electrons and creates an electrostatic wave that produces accelerating fields orders of magnitude higher than what is possible in the traditional radio-frequency cavities used in conventional accelerators.
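    To get a sense of the field strengths involved, the short sketch below evaluates the standard cold, non-relativistic wave-breaking field E₀ = mₑcω_p/e for an assumed plasma density; the density value is an illustrative choice, not a figure from the paper.

    ```python
    import numpy as np

    # Physical constants (SI units)
    e = 1.602e-19      # elementary charge [C]
    m_e = 9.109e-31    # electron mass [kg]
    c = 2.998e8        # speed of light [m/s]
    eps0 = 8.854e-12   # vacuum permittivity [F/m]

    def wave_breaking_field(n_e_per_cm3):
        """Cold, non-relativistic wave-breaking field E0 = m_e * c * omega_p / e
        for a plasma with electron density n_e given in cm^-3."""
        n_e = n_e_per_cm3 * 1e6                        # convert to m^-3
        omega_p = np.sqrt(n_e * e**2 / (eps0 * m_e))   # plasma frequency [rad/s]
        return m_e * c * omega_p / e                   # accelerating field [V/m]

    # Illustrative density for a gas target (assumed value)
    print(f"Plasma gradient ~ {wave_breaking_field(1e18) / 1e9:.0f} GV/m")
    # Conventional RF cavities sustain of order 10-100 MV/m, so the plasma
    # field is roughly a thousand times larger.
    ```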

    LPAs offer all the advantages of an ultra-compact electron accelerator, allowing muon production in a small facility such as BELLA, where Terzani and his colleagues work. Indeed, in their experiment, they succeeded in generating a 10 GeV electron beam in a 30 cm gas target for the first time.

    The researchers collided this beam with a dense target, such as tungsten. This slows the beam down so that it emits bremsstrahlung, or braking radiation, which interacts with the material, producing secondary products that include lepton–antilepton pairs, such as electron–positron and muon–antimuon pairs. The result is a short-lived burst of muons behind the converter target that propagates roughly along the same axis as the incoming electron beam. Thick concrete shielding then filters out most of the secondary products while letting the majority of muons pass through.

    Crucially, Terzani and colleagues were able to separate the muon signal from the large background radiation – something that can be difficult to do because of the inherent inefficiency of the muon production process. This allowed them to identify two different muon populations coming from the accelerator: a collimated, forward-directed population generated by pair production, and a low-energy, isotropic population generated by meson decay.

    Many applications

    Muons can be used in a range of fields, from imaging to fundamental particle physics. As mentioned, muons from cosmic rays are currently used to inspect large and thick objects not accessible to regular X-ray radiography – a recent example being the discovery of a hidden chamber in Khufu’s Pyramid. They can also be used to image the core of a burning blast furnace or nuclear waste storage facilities.

    While the new LPA-based technique cannot yet produce muon fluxes suitable for particle physics experiments – to replace a muon injector, for example – it could offer the accelerator community a convenient way to test and develop essential elements towards making a future muon collider.

    The experiment in this study, which is detailed in Physical Review Accelerators and Beams, focused on detecting the passage of muons, unequivocally proving their signature. The researchers conclude that they now have a much better understanding of the source of these muons.

    Unfortunately, the original programme that funded this research has ended, so future studies are limited at the moment. Not to be disheartened, the researchers say they strongly believe in the potential of LPA-generated muons and are working on resuming some of their experiments. For example, they aim to measure the flux and the spectrum of the resulting muon beam using completely different detection techniques, based on ultra-fast particle trackers.

    The LBNL team also wants to explore different applications, such as imaging deep ore deposits – something that will be quite challenging because it poses strict limitations on the minimum muon energy required to penetrate soil. Therefore, they are looking into how to increase the muon energy of their source.

    The post Portable source could produce high-energy muon beams appeared first on Physics World.

    https://physicsworld.com/a/portable-source-could-produce-high-energy-muon-beams/
    Isabelle Dumé

    Quantum computing on the verge: correcting errors, developing algorithms and building up the user base

    Philip Ball dives into the challenges in developing quantum computing, and building up investments and users for the tech

    The post Quantum computing on the verge: correcting errors, developing algorithms and building up the user base appeared first on Physics World.

    When it comes to building a fully functional “fault-tolerant” quantum computer, companies and government labs all over the world are rushing to be the first over the finish line. But a truly useful universal quantum computer capable of running complex algorithms would have to entangle millions of coherent qubits, which are extremely fragile. Because of environmental factors such as temperature, interference from other electronic systems in hardware, and even errors in measurement, today’s devices would fail under an avalanche of errors long before reaching that point.

    So the problem of error correction is a key issue for the future of the market. It arises because errors in qubits can’t be corrected simply by keeping multiple copies, as they are in classical computers: quantum rules forbid the copying of qubit states while they are still entangled with others, and are thus unknown. To run quantum circuits with millions of gates, we therefore need new tricks to enable quantum error correction (QEC).

    Protected states

    The general principle of QEC is to spread the information over many qubits so that an error in any one of them doesn’t matter too much. “The essential idea of quantum error correction is that if we want to protect a quantum system from damage then we should encode it in a very highly entangled state,” says John Preskill, director of the Institute for Quantum Information and Matter at the California Institute of Technology in Pasadena.

    There is no unique way of achieving that spreading, however. Different error-correcting codes can depend on the connectivity between qubits – whether, say, they are coupled only to their nearest neighbours or to all the others in the device – which tends to be determined by the physical platform being used. However error correction is done, it must be done fast. “The mechanisms for error correction need to be running at a speed that is commensurate with that of the gate operations,” says Michael Cuthbert, founding director of the UK’s National Quantum Computing Centre (NQCC). “There’s no point in doing a gate operation in a nanosecond if it then takes 100 microseconds to do the error correction for the next gate operation.”

    At the moment, dealing with errors is largely about compensation rather than correction: patching up the problems of errors in retrospect, for example by using algorithms that can throw out some results that are likely to be unreliable (an approach called “post-selection”). It’s also a matter of making better qubits that are less error-prone in the first place.

    1 From many to few

    Turning unreliable physical qubits into a logical qubit
    (Courtesy: Riverlane via www.riverlane.com)

    Qubits are so fragile that their quantum state is very susceptible to the local environment, and can easily be lost through the process of decoherence. Current quantum computers therefore have very high error rates – roughly one error in every few hundred operations. For quantum computers to be truly useful, this error rate will have to be reduced to the scale of one in a million, while larger, more complex algorithms would require error rates of one in a billion or even one in a trillion. This requires real-time quantum error correction (QEC).

    To protect the information stored in qubits, a multitude of unreliable physical qubits have to be combined in such a way that if one qubit fails and causes an error, the others can help protect the system. Essentially, by combining many physical qubits (shown above on the left), one can build a few “logical” qubits that are strongly resistant to noise.

    According to Maria Maragkou, commercial vice-president of quantum error-correction company Riverlane, the goal of full QEC has ramifications for the design of the machines all the way from hardware to workflow planning. “The shift to support error correction has a profound effect on the way quantum processors themselves are built, the way we control and operate them, through a robust software stack on top of which the applications can be run,” she explains. The “stack” includes everything from programming languages to user interfaces and servers.

    With genuinely fault-tolerant qubits, errors can be kept under control and prevented from proliferating during a computation. Such qubits might be made in principle by combining many physical qubits into a single “logical qubit” in which errors can be corrected (see figure 1). In practice, though, this creates a large overhead: huge numbers of physical qubits might be needed to make just a few fault-tolerant logical qubits. The question is then whether errors in all those physical qubits can be checked faster than they accumulate (see figure 2).

    That overhead has been steadily reduced over the past several years, and at the end of last year researchers at Google announced that their 105-qubit Willow quantum chip passed the break-even threshold at which the error rate gets smaller, rather than larger, as more physical qubits are used to make a logical qubit. This means that in principle such arrays could be scaled up without errors accumulating.
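    The scaling can be illustrated with a toy calculation. The snippet below uses a simple classical repetition code under independent bit-flip noise – a much cruder scheme than the surface code used in such experiments, and offered only as a sketch of the principle – to show how the logical error rate falls as the code distance d grows, provided the physical error rate lies below the code’s threshold. The physical error rate used is an assumed value.

    ```python
    from math import comb

    def logical_error_rate(p, d):
        """Probability that a majority vote over d independent bit-flip channels
        (each flipping with probability p) gives the wrong answer -- a toy,
        classical stand-in for a distance-d error-correcting code."""
        return sum(comb(d, k) * p**k * (1 - p)**(d - k) for k in range(d // 2 + 1, d + 1))

    p = 1e-3  # assumed physical error rate, roughly one error per thousand operations
    for d in (3, 5, 7, 9):
        print(f"d = {d}: logical error rate ~ {logical_error_rate(p, d):.1e}")
    # The rate drops rapidly with d because p is below this toy code's 50% threshold;
    # real surface codes behave analogously, but with a ~1% threshold and a far
    # larger overhead in physical qubits.
    ```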

    2 Error correction in action

    Illustration of the error correction cycle
    (Courtesy: Riverlane via www.riverlane.com)

    The illustration gives an overview of quantum error correction (QEC) in action within a quantum processing unit. UK-based company Riverlane is building its Deltaflow QEC stack that will correct millions of data errors in real time, allowing a quantum computer to go beyond the reach of any classical supercomputer.

    Fault-tolerant quantum computing is the ultimate goal, says Jay Gambetta, director of IBM research at the company’s centre in Yorktown Heights, New York. He believes that to perform truly transformative quantum calculations, the system must go beyond demonstrating a few logical qubits – instead, you need arrays of at least 100 of them that can perform more than 100 million quantum operations (10⁸ QuOps). “The number of operations is the most important thing,” he says.

    It sounds like a tall order, but Gambetta is confident that IBM will achieve these figures by 2029. By building on what has been achieved so far with error correction and mitigation, he feels “more confident than I ever did before that we can achieve a fault-tolerant computer.” Jerry Chow, previous manager of the Experimental Quantum Computing group at IBM, shares that optimism. “We have a real blueprint for how we can build [such a machine] by 2029,” he says (see figure 3).

    Others suspect the breakthrough threshold may be a little lower: Steve Brierley, chief executive of Riverlane, believes that the first error-corrected quantum computer, with around 10,000 physical qubits supporting 100 logical qubits and capable of a million QuOps (a megaQuOp), could come as soon as 2027. Following on, gigaQuOp machines (10⁹ QuOps) should be available by 2030–32, and teraQuOp machines (10¹² QuOps) by 2035–37.

    Platform independent

    Error mitigation and error correction are just two of the challenges for developers of quantum software. Fundamentally, to develop a truly quantum algorithm involves taking full advantage of the key quantum-mechanical properties such as superposition and entanglement. Often, the best way to do that depends on the hardware used to run the algorithm. But ultimately the goal will be to make software that is not platform-dependent and so doesn’t require the user to think about the physics involved.

    “At the moment, a lot of the platforms require you to come right down into the quantum physics, which is a necessity to maximize performance,” says Richard Murray of photonic quantum-computing company Orca. Try to generalize an algorithm by abstracting away from the physics and you’ll usually lower the efficiency with which it runs. “But no user wants to talk about quantum physics when they’re trying to do machine learning or something,” Murray adds. He believes that ultimately it will be possible for quantum software developers to hide those details from users – but Brierley thinks this will require fault-tolerant machines.

    “In due time everything below the logical circuit will be a black box to the app developers”, adds Maragkou over at Riverlane. “They will not need to know what kind of error correction is used, what type of qubits are used, and so on.” She stresses that creating truly efficient and useful machines depends on developing the requisite skills. “We need to scale up the workforce to develop better qubits, better error-correction codes and decoders, write the software that can elevate those machines and solve meaningful problems in a way that they can be adopted.” Such skills won’t come only from quantum physicists, she adds: “I would dare say it’s mostly not!”

    Yet even now, working on quantum software doesn’t demand a deep expertise in quantum theory. “You can be someone working in quantum computing and solving problems without having a traditional physics training and knowing about the energy levels of the hydrogen atom and so on,” says Ashley Montanaro, who co-founded the quantum software company Phasecraft.

    On the other hand, insights can flow in the other direction too: working on quantum algorithms can lead to new physics. “Quantum computing and quantum information are really pushing the boundaries of what we think of as quantum mechanics today,” says Montanaro, adding that QEC “has produced amazing physics breakthroughs.”

    Early adopters?

    Once we have true error correction, Cuthbert at the UK’s NQCC expects to see “a flow of high-value commercial uses” for quantum computers. What might those be?

    In the arena of quantum chemistry and materials science, genuine quantum advantage – calculating something that is impossible using classical methods alone – is more or less here already, says Chow. Crucially, however, quantum methods needn’t be used for the entire simulation but can be added to classical ones to give them a boost for particular parts of the problem.

    IBM and RIKEN quantum systems
    Joint effort In June 2025, IBM in the US and Japan’s national research laboratory RIKEN unveiled the IBM Quantum System Two, the first to be used outside the US. It involved IBM’s 156-qubit Heron quantum computing system (left) being paired with RIKEN’s supercomputer Fugaku (right) – one of the most powerful classical systems on Earth. The computers are linked through a high-speed network at the fundamental instruction level to form a proving ground for quantum-centric supercomputing. (Courtesy: IBM and RIKEN)

    For example, last year researchers at IBM teamed up with scientists at several RIKEN institutes in Japan to calculate the minimum energy state of the iron–sulphur cluster (4Fe-4S) at the heart of the bacterial nitrogenase enzyme that fixes nitrogen. This cluster is too big and complex to be accurately simulated using the classical approximations of quantum chemistry. The researchers therefore combined quantum computing (with IBM’s 72-qubit Heron chip) with high-performance computing (HPC) on RIKEN’s Fugaku. This idea of “improving classical methods by injecting quantum as a subroutine” is likely to be a more general strategy, says Gambetta. “The future of computing is going to be heterogeneous accelerators [of discovery] that include quantum.”

    Likewise, Montanaro says that Phasecraft is developing “quantum-enhanced algorithms”, where a quantum computer is used, not to solve the whole problem, but just to help a classical computer in some way. “There are only certain problems where we know quantum computing is going to be useful,” he says. “I think we are going to see quantum computers working in tandem with classical computers in a hybrid approach. I don’t think we’ll ever see workloads that are entirely run using a quantum computer.” Among the first important problems that quantum machines will solve, according to Montanaro, are the simulation of new materials – to develop, for example, clean-energy technologies (see figure 4).

    “For a physicist like me,” says Preskill, “what is really exciting about quantum computing is that we have good reason to believe that a quantum computer would be able to efficiently simulate any process that occurs in nature.”

    3 Structural insights

    Modelling materials using quantum computing
    (Courtesy: Phasecraft)

    A promising application of quantum computers is simulating novel materials. Researchers from the quantum algorithms firm Phasecraft, for example, have already shown how a quantum computer could help simulate complex materials such as the polycrystalline compound LK-99, which was purported by some researchers in 2023 to be a room-temperature superconductor.

    Using a classical/quantum hybrid workflow, together with the firm’s proprietary material simulation approach to encode and compile materials on quantum hardware, Phasecraft researchers were able to establish a classical model of the LK-99 structure that allowed them to extract an approximate representation of the electrons within the material. The illustration above shows the green and blue electronic structure around red and grey atoms in LK-99.

    Montanaro believes another likely near-term goal for useful quantum computing is solving optimization problems – both here and in quantum simulation, “we think genuine value can be delivered already in this NISQ era with hundreds of qubits.” (NISQ, a term coined by Preskill, refers to noisy intermediate-scale quantum computing, with relatively small numbers of rather noisy, error-prone qubits.)

    One further potential benefit of quantum computing is that it tends to require less energy than classical high-performance computing, whose energy consumption is notoriously high. If the energy cost could be cut by even a few percent, it would be worth using quantum resources for that reason alone. “Quantum has real potential for an energy advantage,” says Chow. One study in 2020 showed that a particular quantum-mechanical calculation carried out on an HPC system used many orders of magnitude more energy than when it was simulated on a quantum circuit. Such comparisons are not easy, however, in the absence of an agreed and well-defined metric for energy consumption.

    Building the market

    Right now, the quantum computing market is in a curious superposition of states itself – it has ample proof of principle, but today’s devices are still some way from being able to perform a computation relevant to a practical problem that could not be done with classical computers. Yet to get to that point, the field needs plenty of investment.

    The fact that quantum computers, especially if used with HPC, are already unique scientific tools should establish their value in the immediate term, says Gambetta. “I think this is going to accelerate, and will keep the funding going.” It is why IBM is focusing on utility-scale systems of around 100 qubits or so and more than a thousand gate operations, he says, rather than simply trying to build ever bigger devices.

    Montanaro sees a role for governments to boost the growth of the industry “where it’s not the right fit for the private sector”. One role of government is simply as a customer. For example, Phasecraft is working with the UK national grid to develop a quantum algorithm for optimizing the energy network. “Longer-term support for academic research is absolutely critical,” Montanaro adds. “It would be a mistake to think that everything is done in terms of the underpinning science, and governments should continue to support blue-skies research.”

    IBM roadmap of quantum development
    The road ahead IBM’s current roadmap charts how the company plans on scaling up its devices to achieve a fault-tolerant device by 2029. Alongside hardware development, the firm will also focus on developing new algorithms and software for these devices. (Courtesy: IBM)

    It’s not clear, though, whether there will be a big demand for quantum machines that every user will own and run. Before 2010, “there was an expectation that banks and government departments would all want their own machine – the market would look a bit like HPC,” Cuthbert says. But that demand depends in part on what commercial machines end up being like. “If it’s going to need a premises the size of a football field, with a power station next to it, that becomes the kind of infrastructure that you only want to build nationally.” Even for smaller machines, users are likely to try them first on the cloud before committing to installing one in-house.

    According to Cuthbert, the real challenge in supply-chain development is that many of today’s technologies were developed for the science community – where, say, achieving millikelvin cooling or using high-power lasers is routine. “How do you go from a specialist scientific clientele to something that starts to look like a washing machine factory, where you can make them to a certain level of performance,” while also being much cheaper and easier to use?

    But Cuthbert is optimistic about bridging this gap to get to commercially useful machines, encouraged in part by looking back at the classical computing industry of the 1970s. “The architects of those systems could not imagine what we would use our computation resources for today. So I don’t think we should be too discouraged that you can grow an industry when we don’t know what it’ll do in five years’ time.”

    Montanaro too sees analogies with those early days of classical computing. “If you think what the computer industry looked like in the 1940s, it’s very different from even 20 years later. But there are some parallels. There are companies that are filling each of the different niches we saw previously, there are some that are specializing in quantum hardware development, there are some that are just doing software.” Cuthbert thinks that the quantum industry is likely to follow a similar pathway, “but more quickly and leading to greater market consolidation more rapidly.”

    However, while the classical computing industry was revolutionized by the advent of personal computing in the 1970s and 80s, it seems very unlikely that we will have any need for quantum laptops. Rather, we might increasingly see apps and services appear that use cloud-based quantum resources for particular operations, merging so seamlessly with classical computing that we don’t even notice.

    That, perhaps, would be the ultimate sign of success: that quantum computing becomes invisible, no big deal but just a part of how our answers are delivered.

    • In the first instalment of this two-part article, Philip Ball explores the latest developments in the quantum-computing industry

    This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

    Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

    Find out more on our quantum channel.

    The post Quantum computing on the verge: correcting errors, developing algorithms and building up the user base appeared first on Physics World.

    https://physicsworld.com/a/quantum-computing-on-the-verge-correcting-errors-developing-algorithms-and-building-up-the-user-base/
    No Author

    Young rogue planet grows like a star

    Telescope observations reveal material accreting onto the planet Cha1107-7626 in a way that resembles the infancy of stars

    The post Young rogue planet grows like a star appeared first on Physics World.

    When a star rapidly accumulates gas and dust during its early growth phase, it’s called an accretion burst. Now, for the first time, astronomers have observed a planet doing the same thing. The discovery, made using the European Southern Observatory’s Very Large Telescope (VLT) and the James Webb Space Telescope (JWST), shows that the infancy of certain planetary-mass objects and that of newborn stars may share similar characteristics.

    In their study, which is detailed in The Astrophysical Journal Letters, astronomers led by Víctor Almendros-Abad at Italy’s Palermo Astronomical Observatory; Ray Jayawardhana of Johns Hopkins University in the US; and Belinda Damian and Aleks Scholz of the University of St Andrews, UK, focused on a planet known as Cha1107-7626. Located around 620 light-years from Earth, this planet has a mass approximately five to 10 times that of Jupiter. Unlike Jupiter, though, it does not orbit around a central star. Instead, it floats freely in space as a “rogue” planet, one of many identified in recent years.

    An accretion burst in Cha1107-7626

    Like other rogue planets, Cha1107-7626 was known to be surrounded by a disk of dust and gas. When material from this disk spirals, or accretes, onto the planet, the planet grows.

    What Almendros-Abad and colleagues discovered is that this process is not uniform. Using the VLT’s XSHOOTER and the NIRSpec and MIRI instruments on JWST, they found that Cha1107-7626 experienced a burst of accretion beginning in June 2025. This is the first time anyone has seen an accretion burst in an object with such a low mass, and the peak accretion rate of six billion tonnes per second makes it the strongest accretion episode ever recorded in a planetary-mass object. It may not be over, either. At the end of August, when the observing campaign ended, the burst was still ongoing.
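    For a sense of scale, the quick conversion below is simple arithmetic based only on the quoted peak rate; the comparison with the Moon’s mass is added purely for illustration and is not taken from the paper.

    ```python
    # Convert the reported peak accretion rate of six billion tonnes per second
    # into more familiar units (simple arithmetic, not taken from the paper).
    rate_kg_per_s = 6e9 * 1e3            # six billion tonnes per second, in kg/s
    seconds_per_year = 3.156e7
    kg_per_year = rate_kg_per_s * seconds_per_year

    moon_mass_kg = 7.35e22
    print(f"{kg_per_year:.2e} kg per year "
          f"(~{100 * kg_per_year / moon_mass_kg:.1f}% of a lunar mass per year)")
    ```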

    An infancy similar to a star’s

    The team identified several parallels between Cha1107-7626’s accretion burst and those that young stars experience. Among them were clear signs that gas is being funnelled onto the planet. “This indicates that magnetic fields structure the flow of gas, which is again something well known from stars,” explains Scholz. “Overall, our discovery is establishing interesting, perhaps surprising parallels between stars and planets, which I’m not sure we fully understand yet.”

    The astronomers also found that the chemistry of the disk around the planet changed during accretion, with water being present in this phase even though it hadn’t been before. This effect has previously been spotted in stars, but never in a planet until now.

    “We’re struck by quite how much the infancy of free-floating planetary-mass objects resembles that of stars like the Sun,” Jayawardhana says. “Our new findings underscore that similarity and imply that some objects comparable to giant planets form the way stars do, from contracting clouds of gas and dust accompanied by disks of their own, and they go through growth episodes just like newborn stars.”

    The researchers have been studying similar objects for many years and earlier this year published results based on JWST observations that featured a small sample of planetary-mass objects. “This particular study is part of that sample,” Scholz tells Physics World, “and we obtained the present results because Víctor wanted to look in detail at the accretion flow onto Cha1107-7626, and in the process discovered the burst.”

    The researchers say they are “keeping an eye” on Cha1107-7626 and other such objects that are still growing because their environment is dynamic and unstable. “More to the point, we really don’t understand what drives these accretion events, and we need detailed follow-up to figure out the underlying reasons for these processes,” Scholz says.

    The post Young rogue planet grows like a star appeared first on Physics World.

    https://physicsworld.com/a/young-rogue-planet-grows-like-a-star/
    Isabelle Dumé

    Spooky physics: from glowing green bats to vibrating spider webs

    For Halloween, a couple of spooky stories from the world of physics

    The post Spooky physics: from glowing green bats to vibrating spider webs appeared first on Physics World.

    It’s Halloween today, so what better time to bring you a couple of spooky stories from the world of physics?

    First up are researchers at the University of Georgia in the US, who have confirmed that six different species of bats found in North America emit a ghoulish green light when exposed to ultraviolet light.

    The researchers examined 60 specimens from the Georgia Museum of Natural History and exposed the bats to UV light.

    They found that the wings and hind limbs of six species – big brown bats, eastern red bats, Seminole bats, southeastern myotis, grey bats and the Brazilian free-tailed bat – gave off photoluminescence with the resulting glow being a shade of green.

    While previous research found that some mammals, like pocket gophers, also emit a glow under ultraviolet light, this was the first discovery of such a phenomenon for bats located in North America.

    The colour and location of the glow on the winged mammals suggest it is not down to genetics or camouflage and as it is the same between sexes it is probably not used to attract mates.

    “It may not seem like this has a whole lot of consequence, but we’re trying to understand why these animals glow,” notes wildlife biologist Steven Castleberry from the University of Georgia.

    Given that many bats can see the wavelengths emitted, one option is that the glow may be an inherited trait used for communication.

    “The data suggests that all these species of bats got it from a common ancestor. They didn’t come about this independently,” adds Castleberry. “It may be an artifact now, since maybe glowing served a function somewhere in the evolutionary past, and it doesn’t anymore.”

    Thread lightly

    In other frightful news, spider webs are a classic Halloween decoration and while the real things are marvels of bioengineering, there is still more to understand about these sticky structures.

    Many spider species build spiral wheel-shaped webs – orb webs – to capture prey, and some incorporate so-called “stabilimenta” into their web structure. These “extra touches” look like zig-zagging threads that span the gap between two adjacent “spokes,” or threads arranged in a circular “platform” around the web’s centre.

    The purpose of stabilimenta is unknown, with proposed functions including deterring predatory wasps or birds.

    Yet Gabriele Greco of the Swedish University of Agricultural Sciences and colleagues suggest such structures might instead influence the propagation of web vibrations triggered by the impact of captured prey.

    Greco and colleagues observed different stabilimentum geometries that were constructed by wasp spiders, Argiope bruennichi. The researchers then performed numerical simulations to explore how stabilimenta affect prey impact vibrations.

    For waves generated at angles perpendicular to the threads spiralling out from the web centre, stabilimenta caused negligible delays in wave propagation.

    However, for waves generated in the same direction as the spiral threads, vibrations in webs with stabilimenta propagated to a greater number of potential detection points across the web – where a spider might sense them – than in webs without stabilimenta.

    This suggests that stabilimenta may boost a spider’s ability to pinpoint the location of unsuspecting prey caught in its web.

    Spooky.

    The post Spooky physics: from glowing green bats to vibrating spider webs appeared first on Physics World.

    https://physicsworld.com/a/spooky-physics-from-glowing-green-bats-to-vibrating-spider-webs/
    Michael Banks

    Lowering exam stakes could cut the gender grade gap in physics, finds study

    The gender gap in exam results was much larger in the high-stakes classes that did not allow retakes

    The post Lowering exam stakes could cut the gender grade gap in physics, finds study appeared first on Physics World.

    Female university students do much better in introductory physics exams if they have the option of retaking the tests. That’s according to a new analysis of almost two decades of US exam results for more than 26,000 students. The study’s authors say it shows that female students benefit from lower-stakes assessments – and that the persistent “gender grade gap” in physics exam results does not reflect a gender difference in physics knowledge or ability.

    The study has been carried out by David Webb from the University of California, Davis, and Cassandra Paul from San Jose State University. It builds on previous work they did in 2023, which showed that the gender gap disappears in introductory physics classes that offer the chance for all students to retake the exams. That study did not, however, explore why the offer of a retake has such an impact.

    In the new study, the duo analysed exam results from 1997 to 2015 for a series of introductory physics classes at a public university in the US. The dataset included 26,783 students, mostly in biosciences, of whom about 60% were female. Some of the classes let students retake exams while others did not, thereby letting the researchers explore why retakes close the gender gap.

    When Webb and Paul examined the data for classes that offered retakes, they found that in first-attempt exams female students slightly outperformed their male counterparts. But male students performed better than female students in retakes.

    This, the researchers argue, discounts the notion that retakes close the gender gap by allowing female students to improve their grades. Instead, they suggest that the benefit of retakes is that they lower the stakes of the first exam.

    The team then compared the classes that offered retakes with those that did not, which they called high-stakes courses. They found that the gender gap in exam results was much larger in the high-stakes classes than in the lower-stakes classes that allowed retakes.

    “This suggests that high-stakes exams give a benefit to men, on average, [and] lowering the stakes of each exam can remove that bias,” Webb told Physics World. He thinks that as well as allowing students to retake exams, physics might benefit from not having comprehensive high-stakes final exams but instead “use final exam time to let students retake earlier exams”.

    The post Lowering exam stakes could cut the gender grade gap in physics, finds study appeared first on Physics World.

    https://physicsworld.com/a/lowering-exam-stakes-could-cut-the-gender-grade-gap-in-physics-finds-study/
    No Author

    Quantum steampunk: we explore the art and science

    Our podcast guests are a physicist and a sculptor

    The post Quantum steampunk: we explore the art and science appeared first on Physics World.

    Earlier this year I met the Massachusetts-based steampunk artist Bruce Rosenbaum at the Global Physics Summit of the American Physical Society. He was exhibiting a beautiful sculpture of a “quantum engine” that was created in collaboration with physicists including NIST’s Nicole Yunger Halpern – who pioneered the scientific field of quantum steampunk.

    I was so taken by the art and science of quantum steampunk that I promised Rosenbaum that I would chat with him and Yunger Halpern on the podcast – and here is that conversation. We begin by exploring the art of steampunk and how it is influenced by the technology of the 19th century. Then, we look at the physics of quantum steampunk, a field that weds modern concepts of quantum information with thermodynamics – which itself is a scientific triumph of the 19th century.

    • Philip Ball reviews Yunger Halpern’s 2022 book Quantum Steampunk: the Physics of Yesterday’s Tomorrow

     

    This podcast is supported by Atlas Technologies, specialists in custom aluminium and titanium vacuum chambers as well as bonded bimetal flanges and fittings used everywhere from physics labs to semiconductor fabs.

    The post Quantum steampunk: we explore the art and science appeared first on Physics World.

    https://physicsworld.com/a/quantum-steampunk-we-explore-the-art-and-science/
    Hamish Johnston

    Quantum fluids mix like oil and water

    Rayleigh–Taylor instability responsible for mushroom clouds appears in a two-component BEC

    The post Quantum fluids mix like oil and water appeared first on Physics World.

    A grid of diagrams and data showing how the system evolves from a metastable state in which two components, coloured blue and yellow, are stacked on top of each other and separated like oil and water, into a turbulent mixture where blobs of yellow and blue are all over the place. At the interim stages, a small applied disturbing force creates mushroom-like bulges of the yellow fluid into the blue fluid, while a larger force produces finger-like pillars.
    Progression of the Rayleigh–Taylor instability: Initially, an applied force (grey arrows) destabilizes the fluid interface between spin-up (blue) and spin-down (yellow) sodium atoms. The interface then develops nominally sinusoidal modulations where currents counterflowing in the two fluids (coloured arrows) induce vorticity at the interface (black symbols). The modulations subsequently acquire characteristic mushroom or spike shapes before dissolving into a turbulent mixture. (Courtesy: Taken from Science Advances 11 35, 10.1126/sciadv.adw9752, licensed under CC BY-NC)

    Researchers in the US have replicated a well-known fluid-dynamics process called the Rayleigh–Taylor instability on a quantum scale for the first time. The work opens the hydrodynamics of quantum gases to further exploration and could even create a new platform for understanding gravitational dynamics in the early universe.

    If you’ve ever tried mixing oil with water, you’ll understand how the Rayleigh–Taylor instability (RTI) can develop. Due to their different molecular structures and the nature of the forces between their molecules, the two fluids do not mix well. After some time, they separate, forming a clear interface between oil and water.

    Scientists have studied the dynamics of this interface upon perturbations – disturbances of the system – for nearly 150 years, with major work being done by the British physicists Lord Rayleigh in 1883 and Geoffrey Taylor in 1950. Under specific conditions related to the buoyant force of the fluid and the perturbative force causing the disturbance, they showed that this interface becomes unstable. Rather than simply oscillating, the system deviates from its initial state, leading to the formation of interesting geometric patterns such as mushroom clouds and filaments of gas in the Crab Nebula.
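    For reference, the classical linear-stability result (a standard textbook formula, not spelled out in the article) says that a small sinusoidal ripple of wavenumber k on the interface grows exponentially at a rate

    ```latex
    \gamma = \sqrt{A g k}, \qquad
    A = \frac{\rho_{\mathrm{heavy}} - \rho_{\mathrm{light}}}{\rho_{\mathrm{heavy}} + \rho_{\mathrm{light}}}
    ```

    where g is the gravitational (or effective) acceleration and A is the Atwood number. The configuration is unstable whenever the heavier fluid sits above the lighter one, and in this idealized inviscid picture shorter wavelengths initially grow fastest.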

    An interface of spins

    To show that such dynamics occur not only in macroscopic structures, but also at a quantum scale, scientists at the University of Maryland and the Joint Quantum Institute (JQI) created a two-state quantum system using a Bose–Einstein condensate (BEC) of sodium (²³Na) atoms. In this state of matter, the temperature is so low that the sodium atoms behave as a single coherent system, giving researchers precise control of their parameters.

    The JQI team confine this BEC in a two-dimensional optical potential that essentially produces a 100 µm × 100 µm sheet of atoms in the horizontal plane. The scientists then apply a microwave pulse that excites half of the atoms from the spin-down to the spin-up state. By adding a small magnetic field gradient along one of the horizontal axes, they induce a force (the Stern–Gerlach force) that acts on the two spin components in opposite directions due to the differing signs of their magnetic moments. This creates a clear interface between the spin-up and the spin-down atoms.
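    The opposing forces follow from the usual Stern–Gerlach expression, quoted here for completeness (it does not appear in the original article): for a magnetic moment μ in a magnetic field B,

    ```latex
    \mathbf{F} = \nabla\left(\boldsymbol{\mu} \cdot \mathbf{B}\right)
    ```

    so a field gradient along one axis pushes the spin-up and spin-down components, whose magnetic moments have opposite signs, in opposite directions along that axis.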

    Mushrooms and ripplons

    To initiate the RTI, the scientists need to perturb this two-component BEC by reversing the magnetic field gradient, which consequently reverses the direction of the induced force. According to Ian Spielman, who led the work alongside co-principal investigator Gretchen Campbell, this wasn’t as easy as it sounds. “The most difficult part was preparing the initial state (horizontal interface) with high quality, and then reliably inverting the gradient rapidly and accurately,” Spielman says.

    The researchers then investigated how the magnitude of this force difference, acting on the two sides of the interface, affected the dynamics of the two-component BEC. For a small differential force, they initially observed a sinusoidal modulation of the interface. After some time, the interface enters a nonlinear dynamics regime where the RTI manifests through the formation of mushroom clouds. Finally, it becomes a turbulent mixture. The larger the differential force, the more rapidly the system evolves.

    Photo of a darkened optics laboratory with screens and a vacuum system. The scene is bathed in orange light from the lasers used
    Spin up and spin down: Experimental setup showing the vacuum chamber where the sodium atoms are prepared. (Courtesy: Spielman–Campbell laboratory, JQI)

    While RTI dynamics like these were expected to occur in quantum fluids, Spielman points out that proving it required a BEC with the right internal interactions. The BEC of sodium atoms in their experimental setup is one such system.

    In general, Spielman says that cold atoms are a great tool for studying RTI because the numerical techniques used to describe them do not suffer from the same flaws as the Navier–Stokes equation used to model classical fluid dynamics. However, he notes that the transition to turbulence is “a tough problem that resides at the boundary between two conceptually different ways of thinking”, pushing the capabilities of both analytical and numerical techniques.

    The scientists were also able to excite waves known as ripplon modes that travel along the interface of the two-component BEC. These are the quantum analogue of classical capillary waves – the “ripples” seen when a droplet impacts a water surface. Yanda Geng, a JQI PhD student working on this project, explains that every unstable RTI mode has a stable ripplon as a sibling. The difference is that ripplon modes only appear when a small sinusoidal modulation is added to the differential force. “Studying ripplon modes builds understanding of the underlying [RTI] mechanism,” Geng says.

    The flow of the spins

    In a further experiment, the team studied a phenomenon that occurs as the RTI progresses and the spin components of the BEC flow in opposite directions along part of their shared interface. This is known as an interfacial counterflow. By transferring half the atoms into the other spin state after initializing the RTI process, the scientists were able to generate a chain of quantum mechanical whirlpools – a vortex chain – along the interface in regions where interfacial counterflow occurred.

    Spielman, Campbell and their team are now working to create a cleaner interface in their two-component BEC, which would allow a wider range of experiments. “We are considering the thermal properties of this interface as a 1D quantum ‘string’,” says Spielman, adding that the height of such an interface is, in effect, an ultra-sensitive thermometer. Spielman also notes that interfacial waves in higher dimensions (such as a 2D surface) could be used for simulations of gravitational physics.

    The research is described in Science Advances.

    The post Quantum fluids mix like oil and water appeared first on Physics World.

    https://physicsworld.com/a/quantum-fluids-mix-like-oil-and-water/
    Ali Lezeik

    Large-area triple-junction perovskite solar cell achieves record efficiency

    Novel design strategies provide a step towards more efficient and stable perovskite–perovskite–silicon solar cells

    The post Large-area triple-junction perovskite solar cell achieves record efficiency appeared first on Physics World.

    Improving the efficiency of solar cells will likely be one of the key approaches to achieving net zero emissions in many parts of the world. Many types of solar cells will be required, with some of the better performances and efficiencies expected to come from multi-junction solar cells. Multi-junction solar cells comprise a vertical stack of semiconductor materials with distinct bandgaps, with each layer converting a different part of the solar spectrum to maximize conversion of the Sun’s energy to electricity.

    When there are no constraints on the choice of materials, triple-junction solar cells can outperform double-junction and single-junction solar cells, with a power conversion efficiency (PCE) of up to 51% theoretically possible. But material constraints – due to fabrication complexity, cost or other technical challenges – mean that many such devices still perform far from the theoretical limits.

    Perovskites are one of the most promising materials in the solar cell world today, but fabricating practical triple-junction solar cells beyond 1 cm² in area has remained a challenge. A research team from Australia, China, Germany and Slovenia set out to change this, recently publishing a paper in Nature Nanotechnology describing the largest and most efficient triple-junction perovskite–perovskite–silicon tandem solar cell to date.

    When asked why this device architecture was chosen, Anita Ho-Baillie, one of the lead authors from The University of Sydney, states: “I am interested in triple-junction cells because of the larger headroom for efficiency gains”.

    Addressing surface defects in perovskite solar cells

    Solar cells formed from metal halide perovskites have potential to be commercially viable, due to their cost-effectiveness, efficiency, ease of fabrication and their ability to be paired with silicon in multi-junction devices. The ease of fabrication means that the junctions can be directly fabricated on top of each other through monolithic integration – which leads to only two terminal connections, instead of four or six. However, these junctions can still contain surface defects.
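    One practical consequence of monolithic, two-terminal integration is current matching: because the junctions are wired in series, their voltages add while the whole stack carries a single current set by the weakest junction. The toy sketch below illustrates this with invented numbers – they are not measurements from this work.

    ```python
    # Toy illustration of current matching in a series-connected (two-terminal) tandem.
    # The current densities and voltages below are invented for illustration only.
    subcells = [
        {"name": "top perovskite",    "J_mA_per_cm2": 11.0, "V_volts": 1.25},
        {"name": "middle perovskite", "J_mA_per_cm2": 10.5, "V_volts": 0.95},
        {"name": "silicon bottom",    "J_mA_per_cm2": 12.0, "V_volts": 0.70},
    ]

    J_stack = min(c["J_mA_per_cm2"] for c in subcells)  # series connection: one common current
    V_stack = sum(c["V_volts"] for c in subcells)       # voltages of the junctions add
    limiting = min(subcells, key=lambda c: c["J_mA_per_cm2"])["name"]
    print(f"Stack delivers {J_stack} mA/cm^2 at about {V_stack:.2f} V (limited by the {limiting} junction)")
    ```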

    To enhance the performance and resilience of their triple-junction cell (top and middle perovskite junctions on a bottom silicon cell), the researchers optimized the chemistry of the perovskite material and the cell design. They addressed surface defects in the top perovskite junction by replacing traditional lithium fluoride materials with piperazine-1,4-diium chloride (PDCl). They also replaced methylammonium – which is commonly used in perovskite cells – with rubidium. “The rubidium incorporation in the bulk and the PDCl surface treatment improved the light stability of the cell,” explains Ho-Baillie.

    To connect the two perovskite junctions, the team used gold nanoparticles on tin oxide. Because the gold was in a nanoparticle form, the junctions could be engineered to maximize the flow of electric charge and light absorption by the solar cell.

    “Another interesting aspect of the study is the visualization of the gold nanoparticles [using transmission electron microscopy] and the critical point when they become a semi-continuous film, which is detrimental to the multi-junction cell performance due to its parasitic absorption,” says Ho-Baillie. “The optimization for achieving minimal particle coverage while achieving sufficient ohmic contact for vertical carrier flow are useful insights”.

    Record performance for a large-scale perovskite triple-junction cell

    Using these design strategies, Ho-Baillie and colleagues developed a 16 cm² triple-junction cell that achieved an independently certified steady-state PCE of 23.3% – the highest reported for a large-area device. While triple-junction perovskite solar cells have exhibited higher PCEs – with all-perovskite triple-junction cells reaching 28.7% and perovskite–perovskite–silicon devices reaching 27.1% – these were all achieved on a 1 cm² cell, not a large-area cell.

    In this study, the researchers also developed a 1 cm² cell that was close to the best, with a PCE of 27.06%, but it is the large-area cell that’s the record breaker. The 1 cm² cell also passed the International Electrotechnical Commission’s (IEC) 61215 thermal cycling test, which exposes the cell to 200 cycles under extreme temperature swings, ranging from –40 to 85°C. During this test, the 1 cm² cell retained 95% of its initial efficiency after 407 h of continuous operation.

    The successful thermal cycling test, combined with the high efficiencies achieved on a larger cell, shows that this triple-junction architecture could have potential in real-world settings in the near future, even though such devices are still far from their theoretical limits.

    The post Large-area triple-junction perovskite solar cell achieves record efficiency appeared first on Physics World.

    https://physicsworld.com/a/large-area-triple-junction-perovskite-solar-cell-achieves-record-efficiency/
    No Author

    Tim Berners-Lee: why the inventor of the Web is ‘optimistic, idealistic and perhaps a little naïve’

    Tara Shears reviews This is for Everyone: the Captivating Memoir from the Inventor of the World Wide Web by Tim Berners-Lee

    The post Tim Berners-Lee: why the inventor of the Web is ‘optimistic, idealistic and perhaps a little naïve’ appeared first on Physics World.

    It’s rare to come across someone who’s been responsible for enabling a seismic shift in society that has affected almost everyone and everything. Tim Berners-Lee, who invented the World Wide Web, is one such person. His new memoir This is for Everyone unfolds the history and development of the Web and, in places, of the man himself.

    Berners-Lee was born in London in 1955 to parents, originally from Birmingham, who met while working on the Ferranti Mark 1 computer and knew Alan Turing. Theirs was a creative, intellectual and slightly chaotic household. His mother could maintain a motorbike with fence wire and pliers, and was a crusader for equal rights in the workplace. His father – brilliant and absent minded – taught Berners-Lee about computers and queuing theory. A childhood of camping and model trains, it was, in Berners-Lee’s view, idyllic.

    Berners-Lee had the good fortune to be supported by a series of teachers and managers who recognized his potential and unique way of working. He studied physics at the University of Oxford (his tutor “going with the flow” of Berners-Lee’s unconventional notation and ability to approach problems from oblique angles) and built his own computer. After graduating, he married and, following a couple of jobs, took a six-month placement at the CERN particle-physics lab in Geneva in 1985.

    This placement set “a seed that sprouted into a tool that shook up the world”. Berners-Lee saw how difficult it was to share information stored in different languages in incompatible computer systems and how, in contrast, information flowed easily when researchers met over coffee, connected semi-randomly and talked. While at CERN, he therefore wrote a rough prototype for a program to link information in a type of web rather than a structured hierarchy.

    Back at CERN, Tim Berners-Lee developed his vision of a “universal portal” to information

    The placement ended and the program was ignored, but four years later Berners-Lee was back at CERN. Now divorced and soon to remarry, he developed his vision of a “universal portal” to information. It proved to be the perfect time. All the tools necessary to achieve the Web – the Internet, address labelling of computers, network cables, data protocols, the hypertext language that allowed cross-referencing of text and links on the same computer – had already been developed by others.

    Berners-Lee saw the need for a user-friendly interface, using hypertext that could link to information on other computers across the world. His excitement was “uncontainable”, and according to his line manager “few of us if any could understand what he was talking about”. But Berners-Lee’s managers supported him and freed his time away from his actual job to become the world’s first web developer.

    Having a vision was one thing, but getting others to share it was another. People at CERN only really started to use the Web properly once the lab’s internal phone book was made available on it. As a student at the time, I can confirm that it was much, much easier to use the Web than log on to CERN’s clunky IBM mainframe, where phone numbers had previously been stored.

    Wider adoption relied on a set of volunteer developers, working with open-source software, to make browsers and platforms that were attractive and easy to use. CERN agreed to donate the intellectual property for web software to the public domain, which helped. But the path to today’s Web was not smooth: standards risked diverging and companies wanted to build applications that hindered information sharing.

    Feeling that “the Web was outgrowing my institution” and “would be a distraction” to a lab whose core mission was physics, Berners-Lee moved to the Massachusetts Institute of Technology in 1994. There he founded the World Wide Web Consortium (W3C) to ensure consistent, accessible standards were followed by everyone as the Web developed into a global enterprise. The progression sounds straightforward although earlier accounts, such as James Gillies and Robert Caillau’s 2000 book How the Web Was Born, imply some rivalry between institutions that is glossed over here.

    Initially inclined to advise people to share good things and not search for bad things, Berners-Lee had reckoned without the insidious power of “manipulative and coercive” algorithms on social networks

    The rest is history, but not quite the history that Berners-Lee had in mind. By 1995 big business had discovered the possibilities of the Web to maximize influence and profit. Initially inclined to advise people to share good things and not search for bad things, Berners-Lee had reckoned without the insidious power of “manipulative and coercive” algorithms on social networks. Collaborative sites like Wikipedia are closer to his vision of an ideal Web; an emergent good arising from individual empowerment. The flip side of human nature seems to come as a surprise.

    The rest of the book brings us up to date with Berners-Lee’s concerns (data, privacy, misuse of AI, toxic online culture), his hopes (the good use of AI), a third marriage and his move into a data-handling business. There are some big awards and an impressive amount of name dropping; he is excited by Order of Merit lunches with the Queen and by sitting next to Paul McCartney’s family at the opening ceremony to the London Olympics in 2012. A flick through the index reveals names ranging from Al Gore and Bono to Lucien Freud. These are not your average computing technology circles.

    There are brief character studies to illustrate some of the main players, but don’t expect much insight into their lives. This goes for Berners-Lee too, who doesn’t particularly step back to reflect on those around him, or indeed on his own motives beyond that vision of a Web for all enabling the best of humankind. He is firmly future focused.

    Still, there is no-one more qualified to describe what the Web was intended for, its core philosophy, and what caused it to develop to where it is today. You’ll enjoy the book whether you want an insight into the inner workings that make your web browsing possible, relive old and forgotten browser names, or see how big tech wants to monetize and monopolize your online time. It is an easy read from an important voice.

    The book ends with a passionate statement for what the future could be, with businesses and individuals working together to switch the Web from “the attention economy to the intention economy”. It’s a future where users are no longer distracted by social media and manipulated by attention-grabbing algorithms; instead, computers and services do what users want them to do, with the information that users want them to have.

    Berners-Lee is still optimistic, still an incurable idealist, still driven by vision. And perhaps still a little naïve too in believing that everyone’s values will align this time.

    • 2025 Macmillan 400pp £25.00/$30.00hb

    The post Tim Berners-Lee: why the inventor of the Web is ‘optimistic, idealistic and perhaps a little naïve’ appeared first on Physics World.

    https://physicsworld.com/a/tim-berners-lee-why-the-inventor-of-the-web-is-optimistic-idealistic-and-perhaps-a-little-naive/
    No Author

    New protocol makes an elusive superconducting signature measurable

    Physicists have designed a protocol to study high-temperature superconductivity on an experimentally realizable platform

    The post New protocol makes an elusive superconducting signature measurable appeared first on Physics World.

    From immeasurable to measurable: microwave pulses and lattice depth control manipulate fermions on a lattice (left), converting a hard-to-detect signal into a brickwork pattern (right) that reveals d-wave pairing, a key signature of high-temperature superconductors. (Courtesy: Adapted from D K Mark et al. Phys. Rev. Lett. 135 123402 (2025))

    Understanding the mechanism of high-temperature superconductivity could unlock powerful technologies, from efficient energy transmission to medical imaging, supercomputing and more. Researchers at Harvard University and the Massachusetts Institute of Technology have designed a new protocol to study a candidate model for high-temperature superconductivity (HTS), described in Physical Review Letters.

    The model, known as the Fermi-Hubbard model, is believed to capture the essential physics of cuprate high-temperature superconductors, materials composed of copper and oxygen. The model describes fermions, such as electrons, moving on a lattice. The fermions experience two competing effects: tunnelling and on-site interaction. Imagine students in a classroom: they may expend energy to switch seats (tunnelling), avoid a crowded desk (repulsive on-site interaction) or share desks with friends (attractive on-site interaction). Such behaviour mirrors that of electrons moving between lattice sites.
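
    As a concrete (if tiny) illustration of that competition, the sketch below diagonalizes the smallest possible Fermi-Hubbard system – two sites sharing one spin-up and one spin-down fermion – and compares the result with the textbook analytic answer. The parameter values are arbitrary and the example is ours, not the paper’s.

    ```python
    import numpy as np

    # Minimal two-site Fermi-Hubbard model with one spin-up and one spin-down fermion.
    # Basis states label (site of the up fermion, site of the down fermion):
    # |1,1>, |1,2>, |2,1>, |2,2>.  Double occupancy costs U; hopping amplitude is t.
    def two_site_hubbard(t, U):
        return np.array([
            [U,  -t,  -t,  0.0],
            [-t, 0.0, 0.0, -t],
            [-t, 0.0, 0.0, -t],
            [0.0, -t,  -t,  U],
        ])

    t, U = 1.0, 4.0
    E0 = np.linalg.eigvalsh(two_site_hubbard(t, U))[0]

    # Textbook analytic ground-state energy for this toy model
    E0_exact = 0.5 * (U - np.sqrt(U**2 + 16 * t**2))

    print(f"numerical ground state: {E0:.4f}")   # -0.8284 for t = 1, U = 4
    print(f"analytic  ground state: {E0_exact:.4f}")
    ```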

    Daniel Mark, first author of the study, notes: “After nearly four decades of research, there are many detailed numerical studies and theoretical models on how superconductivity can emerge from the Fermi-Hubbard model, but there is no clear consensus [on exactly how it emerges].”

    A precursor to understanding the underlying mechanism is testing whether the Fermi-Hubbard model gives rise to an important signature of cuprate HTS: d-wave pairing. This is a special type of electron pairing where the strength and sign of the pairing depend on the direction of electron motion. It contrasts with conventional low-temperature superconductors that exhibit s-wave pairing, in which the pairing strength is uniform in all directions.
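
    For concreteness, a standard textbook parametrization of a d-wave gap on a square lattice is Δ(k) ∝ cos kx − cos ky (our illustrative choice, not necessarily the exact form used in the study). The snippet below evaluates it along two perpendicular directions to show the sign change, and contrasts it with a direction-independent s-wave gap.

    ```python
    import numpy as np

    def d_wave(kx, ky):
        """Textbook d_{x^2-y^2} gap on a square lattice (lattice constant set to 1)."""
        return np.cos(kx) - np.cos(ky)

    def s_wave(kx, ky):
        """Isotropic s-wave gap: the same in every direction."""
        return 1.0

    print(d_wave(np.pi, 0.0))            # -2.0: pairing along the x direction
    print(d_wave(0.0, np.pi))            # +2.0: equal strength, opposite sign along y
    print(d_wave(np.pi / 2, np.pi / 2))  #  0.0: node along the diagonal
    print(s_wave(np.pi, 0.0), s_wave(0.0, np.pi))  # 1.0 1.0: no direction dependence
    ```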

    Although physicists have developed robust methods for simulating the Fermi-Hubbard model with ultracold atoms, measuring d-wave pairing has been notoriously difficult. The new protocol aims to change that.

    A change of perspective

    A key ingredient in the protocol is the team’s use of “repulsive-to-attractive mapping”. The physics of HTS is often described by the repulsive Fermi-Hubbard model, in which electrons pay an energetic penalty for occupying the same lattice site, like disagreeing students sharing a desk. In this model, detecting d-wave pairing requires fermions to maintain a fragile quantum state as they move over large distances, which necessitates carefully fine-tuned experimental parameters.

    To make the measurement more robust to experimental imperfection, the authors use a clever mathematical trick: they map from the repulsive model to the attractive one. In the attractive model, electrons receive an energetic benefit from being close together, like two friends in a classroom. The mapping is achieved by a particle–hole transformation, wherein spin-down electrons are reinterpreted as holes and vice versa. After mapping, the d-wave pairing signal becomes an observable that conserves local fermion number, thereby circumventing the challenge of long-range motion.
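
    For reference, here is the standard textbook form of this trick (our notation, not necessarily the authors’ exact construction): the transformation acts on the spin-down operators only.

    ```latex
    % Repulsive Fermi-Hubbard Hamiltonian (textbook form)
    H = -t \sum_{\langle i,j \rangle, \sigma}
          \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
        + U \sum_{i} n_{i\uparrow} n_{i\downarrow}

    % Particle-hole transformation on the spin-down species (bipartite lattice)
    c_{i\downarrow} \;\to\; (-1)^{i}\, c^{\dagger}_{i\downarrow},
    \qquad
    c_{i\uparrow} \;\to\; c_{i\uparrow}
    ```

    Because n↓ → 1 − n↓ under this transformation, the interaction term picks up a minus sign (U → −U) up to a shift of the chemical potential, while the hopping term is unchanged on a bipartite lattice – the repulsive problem becomes an attractive one.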

    Pulse sequence: carefully timed pulses, including microwave, hopping and idling pulses, transform the state of the system for easier readout. (Courtesy: D K Mark et al. Phys. Rev. Lett. 135 123402 (2025))

    In its initial form, the d-wave pairing signal is difficult to measure. Drawing inspiration from digital quantum gates, the researchers divide their complex system into subsystems composed of pairs of lattice sites or dimers. Then, they apply a pulse sequence to make the observable measurable by simply counting fermions – a standard technique in the lab.

    The pulse sequence begins with a global microwave pulse to manipulate the spin of the fermions, followed by a series of “hopping” and “idling” steps. The hopping step involves lowering the barrier between lattice sites, thereby increasing tunnelling. The idling step involves raising the barrier, allowing the system to evolve without tunnelling. Every step is carefully timed to reveal the d-wave pairing information at the end of the sequence.

    The researchers report that their protocol is sample-efficient, experimentally viable, and generalizable to other observables that conserve local fermion number and act on dimers.

    This work adds to a growing field that combines components of analogue quantum systems with digital gates to deeply study complex quantum phenomena. “All the experimental ingredients in our protocol have been demonstrated in existing experiments, and we are in discussion with several groups on possible use cases,” Mark tells Physics World.

    The post New protocol makes an elusive superconducting signature measurable appeared first on Physics World.

    https://physicsworld.com/a/new-protocol-makes-an-elusive-superconducting-signature-measurable/
    Candice Chua

    Interface engineered ferromagnetism

    Researchers enhance a 2D ferromagnetic material by layering with a topological insulator to reveal stronger, tuneable behaviour for next-generation quantum devices

    The post Interface engineered ferromagnetism appeared first on Physics World.

    Exchange-coupled interfaces offer a powerful route to stabilising and enhancing ferromagnetic properties in two-dimensional materials, such as transition metal chalcogenides. These materials exhibit strong correlations among charge, spin, orbital, and lattice degrees of freedom, making them an exciting area for emergent quantum phenomena.

    Cr₂Te₃’s crystal structure naturally forms layers that behave like two-dimensional sheets of magnetic material. Each layer has magnetic ordering (ferromagnetism), but the layers are not tightly bonded in the third dimension and are considered “quasi-2D.” These layers are useful for interface engineering. Using a vacuum-based technique for atomically precise thin-film growth, known as molecular beam epitaxy, the researchers demonstrate wafer-scale synthesis of Cr₂Te₃ down to monolayer thickness on insulating substrates. Remarkably, robust ferromagnetism persists even at the monolayer limit, a critical milestone for 2D magnetism.

    When Cr₂Te₃ is proximitized (an effect that occurs when one material is placed in close physical contact with another so that its properties are influenced by the neighbouring material) to a topological insulator, specifically (Bi,Sb)₂Te₃, the Curie temperature, the threshold between ferromagnetic and paramagnetic phases, increases from ~100 K to ~120 K. This enhancement is experimentally confirmed via polarized neutron reflectometry, which reveals a substantial boost in magnetization at the interface.

    Theoretical modelling attributes this magnetic enhancement to the Bloembergen–Rowland interaction, a long-range exchange mechanism mediated by virtual intraband transitions. Crucially, this interaction is facilitated by the topological insulator’s topologically protected surface states, which are spin-polarized and robust against disorder. These states enable long-distance magnetic coupling across the interface, suggesting a universal mechanism for Curie temperature enhancement in topological insulator-coupled magnetic heterostructures.

    This work not only demonstrates a method for stabilizing 2D ferromagnetism but also opens the door to topological electronics, where magnetism and topology are co-engineered at the interface. Such systems could enable novel quantum hybrid devices, including spintronic components, topological transistors, and platforms for realizing exotic quasiparticles like Majorana fermions.

    Read the full article

    Enhanced ferromagnetism in monolayer Cr2Te3 via topological insulator coupling

    Yunbo Ou et al 2025 Rep. Prog. Phys. 88 060501

    Do you want to learn more about this topic?

    Interacting topological insulators: a review by Stephan Rachel (2018)

    The post Interface engineered ferromagnetism appeared first on Physics World.

    https://physicsworld.com/a/interface-engineered-ferromagnetism/
    Lorna Brigham

    Probing the fundamental nature of the Higgs Boson

    ATLAS researchers have provided compelling evidence for off-shell Higgs boson production with vastly increased confidence

    The post Probing the fundamental nature of the Higgs Boson appeared first on Physics World.

    First proposed in 1964, the Higgs boson plays a key role in explaining why many elementary particles of the Standard Model have a rest mass. Many decades later, in 2012, the Higgs boson was observed by the ATLAS and CMS collaborations at the Large Hadron Collider (LHC), confirming the decades-old prediction.

    This discovery made headline news at the time, but researchers certainly haven’t stopped working on the Higgs. In the years since, the two collaborations have performed a series of measurements to establish the fundamental nature of the new particle, of the Higgs field and of the quantum vacuum.

    One key measurement comes from studying a process known as off-shell Higgs boson production. This is the creation of Higgs bosons with a mass significantly higher than their typical on-shell mass of 125 GeV.  This phenomenon occurs due to quantum mechanics, which allows particles to temporarily fluctuate in mass.

    This kind of production is harder to detect but can reveal deeper insights into the Higgs boson’s properties, especially its total width, which relates to how long it exists before decaying. This, in turn, allows us to test key predictions made by the Standard Model of particle physics.

    Previous observations of this process had been severely limited in their sensitivity. In order to improve on this, the ATLAS collaboration had to introduce a completely new way of interpreting their data.

    They were able to provide evidence for off-shell Higgs boson production with a significance of 2.5𝜎 (corresponding to a 99.38% likelihood), using events with four electrons or muons, compared to a significance of 0.8𝜎 using traditional methods in the same channel.
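
    As a quick arithmetic check (the standard one-sided Gaussian conversion, nothing specific to the ATLAS analysis), the quoted 99.38% corresponds to the cumulative normal probability at 2.5σ:

    ```python
    from scipy.stats import norm

    for z in (0.8, 2.5):
        # One-sided Gaussian: probability that a background-only fluctuation stays below z sigma
        print(f"{z} sigma -> {norm.cdf(z):.4%}  (p-value {norm.sf(z):.4f})")

    # 2.5 sigma -> 99.3790%, matching the 99.38% quoted in the text
    # 0.8 sigma -> 78.8145%
    ```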

    The results mark an important step forward in understanding the Higgs boson as well as other high-energy particle physics phenomena.

    Read the full article

    Measurement of off-shell Higgs boson production in the decay channel using a neural simulation-based inference technique in 13 TeV pp collisions with the ATLAS detector

    The ATLAS Collaboration, 2025 Rep. Prog. Phys. 88 057803

    The post Probing the fundamental nature of the Higgs Boson appeared first on Physics World.

    https://physicsworld.com/a/probing-the-fundamental-nature-of-the-higgs-boson/
    Paul Mabey

    Fabrication and device performance of NiO/Ga2O3 heterojunction power rectifiers

    Join the audience for a live webinar at 6 p.m. GMT/1 p.m. EST on 19 November 2025

    Discover how NiO/Ga₂O₃ heterojunction rectifiers unlock high-performance power electronics with breakthrough thermal, radiation, and structural resilience—driving innovation in EVs, AI data centers, and aerospace systems

    The post Fabrication and device performance of NiO/Ga₂O₃ heterojunction power rectifiers appeared first on Physics World.


    This talk shows how integrating p-type NiO to form NiO/Ga₂O₃ heterojunction rectifiers overcomes a key barrier for Ga₂O₃ devices, enabling record-class breakdown and Ampere-class operation. It will cover device structure/process optimization, thermal stability to high temperatures, and radiation response – with direct ties to today’s priorities: EV fast charging, AI data‑center power systems, and aerospace/space‑qualified power electronics.

    An interactive Q&A session follows the presentation.

     

    Jian-Sian Li

    Jian-Sian Li received his PhD in chemical engineering from the University of Florida in 2024, where his research focused on NiO/β-Ga₂O₃ heterojunction power rectifiers, including device design, process optimization, fast switching, high-temperature stability, and radiation tolerance (γ, neutron, proton). His work includes extensive electrical characterization and microscopy/TCAD analysis supporting device physics and reliability in harsh environments. Previously, he completed his BS and MS at National Taiwan University (2015, 2018), with research spanning phoretic/electrokinetic colloids, polymers for OFETs/PSCs, and solid-state polymer electrolytes for Li-ion batteries. He has since transitioned to industry at Micron Technology.

    The post Fabrication and device performance of NiO/Ga₂O₃ heterojunction power rectifiers appeared first on Physics World.

    https://physicsworld.com/a/fabrication-and-device-performance-of-ni0-ga2o3-heterojunction-power-rectifiers/
    No Author

    Randomly textured lithium niobate gives snapshot spectrometer a boost

    Compact system outperforms astronomical spectrometers

    The post Randomly textured lithium niobate gives snapshot spectrometer a boost appeared first on Physics World.

    A new integrated “snapshot spectroscopy” system developed in China can determine the spectral and spatial composition of light from an object with much better precision than other existing systems. The instrument uses randomly textured lithium niobate and its developers have used it for astronomical imaging and materials analysis – and they say that other applications are possible.

    Spectroscopy is crucial to analysis of all kinds of objects in science and engineering, from studying the radiation emitted by stars to identifying potential food contaminants. Conventional spectrometers – such as those used on telescopes – rely on diffractive optics to separate incoming light into its constituent wavelengths. This makes them inherently large, expensive and inefficient at rapid image acquisition as the light from each point source has to be spatially separated to resolve the wavelength components.

    In recent years researchers have combined computational methods with advanced optical sensors to create computational spectrometers with the potential to rival conventional instruments. One such approach is hyperspectral snapshot imaging, which captures both spectral and spatial information in the same image. There are currently two main snapshot-imaging techniques available. Narrowband-filtered snapshot spectral imagers comprise a mosaic pattern of narrowband filters and acquire an image by taking repeated snapshots at different wavelengths. However, these trade spectral resolution with spatial resolution, as each extra band requires its own tile within the mosaic. A more complex alternative design – the broadband-modulated snapshot spectral imager – uses a single, broadband detector covered with a spatially varying element such as a metasurface that interacts with the light and imprints spectral encoding information onto each pixel. However, these are complex to manufacture and their spectral resolution is limited to the nanometre scale.

    Random thicknesses

    In the new work, researchers led by Lu Fang at Tsinghua University in Beijing unveil a spectroscopy technique that utilizes the nonlinear optical properties of lithium niobate to achieve sub-Ångström spectral resolution in a simply fabricated, integrated snapshot detector they call RAFAEL. A lithium niobate layer with random, sub-wavelength thickness variations is surrounded by distributed Bragg reflectors, forming optical cavities. These are integrated into a stack with a set of electrodes. Each cavity corresponds to a single pixel. Incident light enters  from one side of a cavity, interacting with the lithium niobate repeatedly before exiting and being detected. Because lithium niobate is nonlinear, its response varies with the wavelength of the light.

    The researchers then applied a bias voltage using the electrodes. The nonlinear optical response of lithium niobate means that this bias alters its response to light differently at different wavelengths. Moreover, the random variation of the lithium niobate’s thickness around the surface means that the wavelength variation is spatially specific.

    The researchers designed a machine learning algorithm and trained it to use how the detected signal at each pixel varies with the applied bias voltage to reconstruct the spectrum of the light incident at each point in space.

    “The randomness is useful for making the equations independent,” explains Fang; “We want to have uncorrelated equations so we can solve them.”
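
    A toy illustration of why that matters (an idealized linear model with invented numbers, not the real device calibration): each bias-voltage setting provides one linear equation relating the unknown spectrum at a pixel to a measured signal, and random, uncorrelated responses keep the resulting system well conditioned enough to invert.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_channels = 50          # unknown spectral channels at one pixel
    n_bias = 80              # measurements at different bias voltages

    # Random spectral response for each bias setting (a stand-in for the random
    # lithium niobate thickness plus the voltage-dependent nonlinear response).
    A_random = rng.random((n_bias, n_channels))

    # A nearly identical response for every bias setting, for comparison:
    # the equations are strongly correlated and carry little independent information.
    A_smooth = np.ones((n_bias, n_channels)) + 0.001 * rng.random((n_bias, n_channels))

    true_spectrum = rng.random(n_channels)

    for name, A in [("random responses", A_random), ("correlated responses", A_smooth)]:
        y = A @ true_spectrum + 1e-3 * rng.standard_normal(n_bias)   # noisy measurements
        recon, *_ = np.linalg.lstsq(A, y, rcond=None)
        err = np.linalg.norm(recon - true_spectrum) / np.linalg.norm(true_spectrum)
        print(f"{name}: relative reconstruction error = {err:.3f}, "
              f"condition number = {np.linalg.cond(A):.1e}")
    ```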

    Thousands of stars

    The researchers showed that they could achieve 88 Hz snapshot spectroscopy on a grid of 2048×2048 pixels with a spectral resolution of 0.5 Å (0.05 nm) between wavelengths of 400–1000 nm. They demonstrated this by capturing the full atomic absorption spectra of up to 5600 stars in a single snapshot. This is a two to four orders of magnitude improvement in observational efficiency over world-class astronomical spectrometers. They also demonstrated other applications, including a materials analysis challenge involving the distinction of a real leaf from a fake one. The two looked identical at optical wavelengths, but, using its broader range of wavelengths, RAFAEL was able to distinguish between the two.

    The researchers are now attempting to improve the device further: “I still think that sub-Ångstrom is not the ending – it’s just the starting point,” says Fang. “We want to push the limit of our resolution to the picometre.” In addition, she says, they are working on further integration of the device – which requires no specialized lithography – for easier use in the field. “We’ve already put this technology on a drone platform,” she reveals. The team is also working with astronomical observatories such as Gran Telescopio Canarias in La Palma, Spain.

    The research is described in Nature.

    Computational imaging expert David Brady of Duke University in North Carolina is impressed by the instrument. “It’s a compact package with extremely high spectral resolution,” he says; “Typically an optical instrument, like a CMOS sensor that’s used here, is going to have between 10,000 and 100,000 photo-electrons per pixel. That’s way too many photons for getting one measurement…I think you’ll see that with spectral imaging as is done here, but also with temporal imaging. People are saying you don’t need to go at 30 frames per second, you can go at a million frames per second and push closer to the single photon limit, and then that would require you to do computation to figure out what it all means.”

    The post Randomly textured lithium niobate gives snapshot spectrometer a boost appeared first on Physics World.

    https://physicsworld.com/a/randomly-textured-lithium-niobate-gives-snapshot-spectrometer-a-boost/
    No Author

    Tumour-specific radiofrequency fields suppress brain cancer growth

    Low-level radiofrequency therapy inhibited the growth of glioblastoma cells and demonstrated clinical benefit in a patient with brain cancer

    The post Tumour-specific radiofrequency fields suppress brain cancer growth appeared first on Physics World.

    A research team headed up at Wayne State University School of Medicine in the US has developed a novel treatment for glioblastoma, based on exposure to low levels of radiofrequency electromagnetic fields (RF EMF). The researchers demonstrated that the new therapy slows the growth of glioblastoma cells in vitro and, for the first time, showed its feasibility and clinical impact in patients with brain tumours.

    The study, led by Hugo Jimenez and reported in Oncotarget, uses a device developed by TheraBionic that delivers amplitude-modulated 27.12 MHz RF EMF throughout the entire body, via a spoon-shaped antenna placed on the tongue. Using tumour-specific modulation frequencies, the device has already received US FDA approval for treating patients with advanced hepatocellular carcinoma (HCC, a liver cancer), while its safety and effectiveness are currently being assessed in clinical trials in patients with pancreatic, colorectal and breast cancer.

    In this latest work, the team investigated its use in glioblastoma, an aggressive and difficult-to-treat brain tumour.

    To identify the particular frequencies needed to treat glioblastoma, the team used a non-invasive biofeedback method developed previously to study patients with various types of cancer. The process involves measuring variations in skin electrical resistance, pulse amplitude and blood pressure while individuals are exposed to low levels of amplitude-modulated frequencies. The approach can identify the frequencies, usually between 1 Hz and 100 kHz, specific to a single tumour type.

    Jimenez and colleagues first examined the impact of glioblastoma-specific amplitude-modulated RF EMF (GBMF) on glioblastoma cells, exposing various cell lines to GBMF for 3 h per day at the exposure level used for patient treatments. After one week, GBMF decreased the proliferation of three glioblastoma cell lines (U251, BTCOE-4765 and BTCOE-4795) by 34.19%, 15.03% and 14.52%, respectively.

    The team note that the level of this inhibitive effect (15–34%) is similar to that observed in HCC cell lines (19–47%) and breast cancer cell lines (10–20%) treated with tumour-specific frequencies. A fourth glioblastoma cell line (BTCOE-4536) was not inhibited by GBMF, for reasons currently unknown.

    Next, the researchers examined the effect of GBMF on cancer stem cells, which are responsible for treatment resistance and cancer recurrence. The treatment decreased the tumour sphere-forming ability of U251 and BTCOE-4795 cells by 36.16% and 30.16%, respectively – also a comparable range to that seen in HCC and breast cancer cells.

    Notably, these effects were only induced by frequencies associated with glioblastoma. Exposing glioblastoma cells to HCC-specific modulation frequencies had no measurable impact and was indistinguishable from sham exposure.

    Looking into the underlying treatment mechanisms, the researchers hypothesized that – as seen in breast cancer and HCC – glioblastoma cell proliferation is mediated by T-type voltage-gated calcium channels (VGCC). In the presence of a VGCC blocker, GBMF did not inhibit cell proliferation, confirming that GBMF inhibition of cell proliferation depends on T-type VGCCs, in particular, a calcium channel known as CACNA1H.

    The team also found that GBMF blocks the growth of glioblastoma cells by modulating the “Mitotic Roles of Polo-Like Kinase” signalling pathway, leading to disruption of the cells’ mitotic spindles, critical structures in cell replication.

    A clinical first

    Finally, the researchers used the TheraBionic device to treat two patients: a 38-year-old patient with recurrent glioblastoma and a 47-year-old patient with the rare brain tumour oligodendroglioma. The first patient showed signs of clinical and radiological benefit following treatment; the second exhibited stable disease and tolerated the treatment well.

    “This is the first report showing feasibility and clinical activity in patients with brain tumour,” the authors write. “Similarly to what has been observed in patients with breast cancer and hepatocellular carcinoma, this report shows feasibility of this treatment approach in patients with malignant glioma and provides evidence of anticancer activity in one of them.”

    The researchers add that a previous dosimetric analysis of this technique measured a whole-body specific absorption rate (SAR, the rate of energy absorbed by the body when exposed to RF EMF) of 1.35 mW/kg and a peak spatial SAR (over 1 g of tissue) of 146–352 mW/kg. These values are well within the safety limits set by the ICNIRP (whole-body SAR of 80 mW/kg; peak spatial SAR of 2000 mW/kg). Organ-specific values for grey matter, white matter and the midbrain also had mean SAR ranges well within the safety limits.
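
    As a simple back-of-the-envelope comparison (plain ratios using only the figures quoted above):

    ```python
    # Reported exposure vs the ICNIRP limits quoted in the study (all values in mW/kg)
    whole_body_sar, whole_body_limit = 1.35, 80.0
    peak_sar_range, peak_limit = (146.0, 352.0), 2000.0

    print(f"whole-body SAR: {100 * whole_body_sar / whole_body_limit:.1f}% of the limit")
    print(f"peak spatial SAR: {100 * peak_sar_range[0] / peak_limit:.0f}-"
          f"{100 * peak_sar_range[1] / peak_limit:.0f}% of the limit")
    # roughly 1.7% and 7-18% respectively, i.e. well within the safety limits
    ```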

    The team concludes that the results justify future preclinical and clinical studies of the TheraBionic device in this patient population. “We are currently in the process of designing clinical studies in patients with brain tumors,” Jimenez tells Physics World.

    The post Tumour-specific radiofrequency fields suppress brain cancer growth appeared first on Physics World.

    https://physicsworld.com/a/tumour-specific-radiofrequency-fields-suppress-brain-cancer-growth/
    Tami Freeman

    Entangled light leads to quantum advantage

    Number of measurements required to learn about the behaviour of a complex, noisy quantum system reduced by a factor of 10¹¹

    The post Entangled light leads to quantum advantage appeared first on Physics World.

    Quantum manipulation: The squeezer – an optical parametric oscillator (OPO) that uses a nonlinear crystal inside an optical cavity to manipulate the quantum fluctuations of light – is responsible for the entanglement. (Courtesy: Jonas Schou Neergaard-Nielsen)

    Physicists at the Technical University of Denmark have demonstrated what they describe as a “strong and unconditional” quantum advantage in a photonic platform for the first time. Using entangled light, they were able to reduce the number of measurements required to characterize their system by a factor of 10¹¹, with a correspondingly huge saving in time.

    “We reduced the time it would take from 20 million years with a conventional scheme to 15 minutes using entanglement,” says Romain Brunel, who co-led the research together with colleagues Zheng-Hao Liu and Ulrik Lund Andersen.
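
    A rough consistency check of those two figures (simple unit conversion on our part, not the paper’s own estimate) shows that compressing 20 million years into roughly 15 minutes is indeed a speed-up of order 10¹¹:

    ```python
    minutes_per_year = 365.25 * 24 * 60
    conventional = 20e6 * minutes_per_year   # ~1.05e13 minutes
    entangled = 15.0                         # minutes

    speedup = conventional / entangled
    print(f"speed-up ~ {speedup:.1e}")       # ~7e11, i.e. of order 10^11
    ```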

    Although the research, which is described in Science, is still at a preliminary stage, Brunel says it shows that major improvements are achievable with current photonic technologies. In his view, this makes it an important step towards practical quantum-based protocols for metrology and machine learning.

    From individual to collective measurement

    Quantum devices are hard to isolate from their environment and extremely sensitive to external perturbations. That makes it a challenge to learn about their behaviour.

    To get around this problem, researchers have tried various “quantum learning” strategies that replace individual measurements with collective, algorithmic ones. These strategies have already been shown to reduce the number of measurements required to characterize certain quantum systems, such as superconducting electronic platforms containing tens of quantum bits (qubits), by as much as a factor of 10⁵.

    A photonic platform

    In the new study, Brunel, Liu, Andersen and colleagues obtained a quantum advantage in an alternative “continuous-variable” photonic platform. The researchers note that such platforms are far easier to scale up than superconducting qubits, which they say makes them a more natural architecture for quantum information processing. Indeed, photonic platforms have already been crucial to advances in boson sampling, quantum communication, computation and sensing.

    The team’s experiment works with conventional, “imperfect” optical components and consists of a channel containing multiple light pulses that share the same pattern, or signature, of noise. The researchers began by performing a procedure known as quantum squeezing on two beams of light in their system. This caused the beams to become entangled – a quantum phenomenon that creates such a strong linkage that measuring the properties of one instantly affects the properties of the other.

    The team then measured the properties of one of the beams (the “probe” beam) in an experiment known as a 100-mode bosonic displacement process. According to Brunel, one can imagine this experiment as being like tweaking the properties of 100 independent light modes, which are packets or beams of light. “A ‘bosonic displacement process’ means you slightly shift the amplitude and phase of each mode, like nudging each one’s brightness and timing,” he explains. “So, you then have 100 separate light modes, and each one is shifted in phase space according to a specific rule or pattern.”
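
    In standard continuous-variable notation (textbook definitions, not the authors’ specific parameter choices), “displacing” mode k by a complex amplitude αk means acting with the operator

    ```latex
    % Displacement of a single bosonic mode k by a complex amplitude alpha_k
    D(\alpha_k) = \exp\!\left( \alpha_k \hat{a}^{\dagger}_k - \alpha^{*}_k \hat{a}_k \right),
    \qquad
    D^{\dagger}(\alpha_k)\, \hat{a}_k\, D(\alpha_k) = \hat{a}_k + \alpha_k
    ```

    so the real and imaginary parts of each αk set the shifts of that mode’s two quadratures – its “brightness and timing” in the analogy above – and the learning task amounts to characterizing the whole set of 100 displacements.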

    By comparing the probe beam to the second (“reference”) beam in a single joint measurement, Brunel explains that he and his colleagues were able to cancel out much of the uncertainty in these measurements. This meant they could extract more information per trial than they could have by characterizing the probe beam alone. This information boost, in turn, allowed them to significantly reduce the number of measurements – in this case, by a factor of 10¹¹.

    While the DTU researchers acknowledge that they have not yet studied a practical, real-world system, they emphasize that their platform is capable of “doing something that no classical system will ever be able to do”, which is the definition of a quantum advantage. “Our next step will therefore be to study a more practical system in which we can demonstrate a quantum advantage,” Brunel tells Physics World.

    The post Entangled light leads to quantum advantage appeared first on Physics World.

    https://physicsworld.com/a/entangled-light-leads-to-quantum-advantage/
    Isabelle Dumé

    Queer Quest: a quantum-inspired journey of self-discovery

    Exploring how quantum ideas can inspire liberation, identity and community

    The post Queer Quest: a quantum-inspired journey of self-discovery appeared first on Physics World.

    This episode of Physics World Stories features an interview with Jessica Esquivel and Emily Esquivel – the creative duo behind Queer Quest. The event created a shared space for 2SLGBTQIA+ Black and Brown people working in science, technology, engineering, arts and mathematics (STEAM).

    Mental health professionals also joined Queer Quest, which was officially recognized by UNESCO as part of the International Year of Quantum Science and Technology (IYQ). Over two days in Chicago this October, the event brought science, identity and wellbeing into powerful conversation.

    Jessica Esquivel, a particle physicist and associate scientist at Fermilab, is part of the Muon g-2 experiment, pushing the limits of the Standard Model. Emily Esquivel is a licensed clinical professional counsellor. Together, they run Oyanova, an organization empowering Black and Brown communities through science and wellness.

    Quantum metaphors and resilience through connection

    Queer Quest advert: a woman’s face inside a planet. (Courtesy: Oyanova)

    Queer Quest blended keynote talks with collective conversations, meditation and other wellbeing activities. Panellists drew on quantum metaphors – such as entanglement – to explore identity, community and mental health.

    In a wide-ranging conversation with podcast host Andrew Glester, Jessica and Emily speak about the inspiration for the event, and the personal challenges they have faced within academia. They speak about the importance of building resilience through community connections, especially given the social tensions in the US right now.

    Hear more from Jessica Esquivel in her 2021 Physics World Stories appearance on the latest developments in muon science.

    This article forms part of Physics World’s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

    Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

    Find out more on our quantum channel.

     

    The post Queer Quest: a quantum-inspired journey of self-discovery appeared first on Physics World.

    https://physicsworld.com/a/queer-quest-a-quantum-inspired-journey-of-self-discovery/
    James Dacey

    Fingerprint method can detect objects hidden in complex scattering media

    Novel imaging technique can find objects buried within opaque environments, including biological tissues

    The post Fingerprint method can detect objects hidden in complex scattering media appeared first on Physics World.

    Imaging buried objects Left: artistic impression of metal spheres buried in small glass beads; centre: conventional ultrasound image; right: the new technology can precisely determine the positions of the metal spheres. (Courtesy: TU Wien/Arthur Le Ber)

    Physicists have developed a novel imaging technique for detecting and characterizing objects hidden within opaque, highly scattering material. The researchers, from France and Austria, showed that their new mathematical approach, which utilizes the fact that hidden objects generate their own complex scattering pattern, or “fingerprint”, can work on biological tissue.

    Viewing the inside of the human body is challenging due to the scattering nature of tissue. With ultrasound, when waves propagate through tissue they are reflected, bounce around and scatter chaotically, creating noise that obscures the signal from the object that the medical practitioner is trying to see. The further you delve into the body the more incoherent the image becomes.

    There are techniques for overcoming these issues, but as scattering increases – in more complex media or as you push deeper through tissue – they struggle and unpicking the required signal becomes too complex.

    The scientists behind the latest research, from the Institut Langevin in Paris, France and TU Wien in Vienna, Austria, say that rather than compensating for scattering, their technique instead relies on detecting signals from the hidden object in the disorder.

    Objects buried in a material create their own complex scattering pattern, and the researchers found that if you know an object’s specific acoustic signal it’s possible to find it in the noise created by the surrounding environment.

    “We cannot see the object, but the backscattered ultrasonic wave that hits the microphones of the measuring device still carries information about the fact that it has come into contact with the object we are looking for,” explains Stefan Rotter, a theoretical physicist at TU Wien.

    Rotter and his colleagues examined how a series of objects scattered ultrasound waves in an interference-free environment. This created what they refer to as fingerprint matrices: measurements of the specific, characteristic way in which each object scattered the waves.

    The team then developed a mathematical method that allowed them to calculate the position of each object when hidden in a scattering medium, based on its fingerprint matrix.

    “From the correlations between the measured reflected wave and the unaltered fingerprint matrix, it is possible to deduce where the object is most likely to be located, even if the object is buried,” explains Rotter.
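
    A heavily simplified, one-dimensional sketch of that idea (a matched-filter toy model of our own, not the team’s reflection-matrix formalism): correlate the noisy measured return with the object’s known signature at every candidate position and pick the position where the correlation peaks.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n_positions = 400                       # candidate positions along a line
    fingerprint = rng.standard_normal(50)   # known scattering signature of the object

    # Synthetic measurement: the signature buried at an unknown position in clutter of
    # comparable amplitude, so it is invisible in the raw trace.
    true_pos = 237
    measurement = rng.standard_normal(n_positions + len(fingerprint) - 1)
    measurement[true_pos:true_pos + len(fingerprint)] += fingerprint

    # Matched filter: correlate the measurement with the fingerprint at every position
    score = np.correlate(measurement, fingerprint, mode="valid")
    estimate = int(np.argmax(score))
    print(f"true position: {true_pos}, estimated position: {estimate}")
    ```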

    The team tested the technique in three different scenarios. The first experiment trialled the ultrasound imaging of metal spheres in a dense suspension of glass beads in water. Conventional ultrasound failed in this setup and the spheres were completely invisible, but with their novel fingerprint method the researchers were able to accurately detect them.

    Next, to examine a medical application for the technique, the researchers embedded lesion markers often used to monitor breast tumours in a foam designed to mimic the ultrasound scattering of soft tissue. These markers can be challenging to detect due to scatterers randomly distributed in human tissue. With the fingerprint matrix, however, the researchers say that the markers were easy to locate.

    Finally, the team successfully mapped muscle fibres in a human calf using the technique. They claim this could be useful for diagnosing and monitoring neuromuscular diseases.

    According to Rotter and his colleagues, their fingerprint matrix method is a versatile and universal technique that could be applied beyond ultrasound to all fields of wave physics. They highlight radar and sonar as examples of sensing techniques where target identification and detection in noisy environments are long-standing challenges.

    “The concept of the fingerprint matrix is very generally applicable – not only for ultrasound, but also for detection with light,” Rotter says. “It opens up important new possibilities in all areas of science where a reflection matrix can be measured.”

    The researchers report their findings in Nature Physics.

    The post Fingerprint method can detect objects hidden in complex scattering media appeared first on Physics World.

    https://physicsworld.com/a/fingerprint-method-can-detect-objects-hidden-in-complex-scattering-media/
    No Author

    Ask me anything: Kirsty McGhee – ‘Follow what you love: you might end up doing something you never thought was an option’

    Kirsty McGhee explains how she became a science writer in industry

    The post Ask me anything: Kirsty McGhee – ‘Follow what you love: you might end up doing something you never thought was an option’ appeared first on Physics World.

    What skills do you use every day in your job?

    Obviously, I write: I wouldn’t be a very good science writer if I couldn’t. So communication skills are vital. Recently, for example, Qruise launched a new magnetic-resonance product for which I had to write a press release, create a new webpage and do social-media posts. That meant co-ordinating with lots of different people, finding out the key features to advertise, identifying the claims we wanted to make – and whether we had the data to back those claims up. I’m not an expert in quantum computing or magnetic-resonance imaging or even marketing, so I have to pick things up fast and then translate technically complex ideas from physics and software into simple messages for a broader audience. Thankfully, my colleagues are always happy to help. Science writing is a difficult task but I think I’m getting better at it.

    What do you like best and least about your job?

    I love the variety and the fact that I’m doing so many different things all the time. If there’s a day I feel I want something a little bit lighter, I can do some social media or the website, which is more creative. On the other hand, if I feel I could really focus in detail on something then I can write some documentation that is a little bit more technical. I also love the flexibility of remote working, but I do miss going to the office and socialising with my colleagues on a regular basis. You can’t get to know someone as well online; it’s nicer to have time with them in person.

    What do you know today that you wish you knew when you were starting out in your career?

    That’s a hard one. It would be easy to say I wish I’d known earlier that I could combine science and writing and make a career out of that. On the other hand, if I’d known that, I might not have done my PhD – and if I’d gone into writing straight after my undergraduate degree, I perhaps wouldn’t be where I am now. My point is, it’s okay not to have a clear plan in life. As children, we’re always asked what we want to be – in my case, my dream from about the age of four was to be a vet. But then I did some work experience in a veterinary practice and I realized I’m really squeamish. It was only when I was 15 or 16 that I discovered I wanted to do physics because I liked it and was good at it. So just follow the things you love. You might end up doing something you never even thought was an option.

    The post Ask me anything: Kirsty McGhee – ‘Follow what you love: you might end up doing something you never thought was an option’ appeared first on Physics World.

    https://physicsworld.com/a/ask-me-anything-kirsty-mcghee-follow-what-you-love-you-might-end-up-doing-something-you-never-thought-was-an-option/
    Hamish Johnston

    New adaptive optics technology boosts the power of gravitational wave detectors

    FROnt Surface Type Irradiator, or FROSTI, will allow future detectors to run at higher laser powers, reducing noise and expanding capabilities

    The post New adaptive optics technology boosts the power of gravitational wave detectors appeared first on Physics World.

    Future versions of the Laser Interferometer Gravitational Wave Observatory (LIGO) will be able to run at much higher laser powers thanks to a sophisticated new system that compensates for temperature changes in optical components. Known as FROSTI (for FROnt Surface Type Irradiator) and developed by physicists at the University of California Riverside, US, the system will enable next-generation machines to detect gravitational waves emitted when the universe was just 0.1% of its current age, before the first stars had even formed.

    Gravitational waves are distortions in spacetime that occur when massive astronomical objects accelerate and collide. When these distortions pass through the four-kilometre-long arms of the two LIGO detectors, they create a tiny difference in the (otherwise identical) distance that light travels between the centre of the observatory and the mirrors located at the end of each arm. The problem is that detecting and studying gravitational waves requires these differences in distance to be measured with an accuracy of 10⁻¹⁹ m, which is 1/10,000th the size of a proton.

    Extending the frequency range

    LIGO overcame this barrier 10 years ago when it detected the gravitational waves produced when two black holes located roughly 1.3 billion light-years from Earth merged. Since then, it and two smaller facilities, KAGRA and VIRGO, have observed many other gravitational waves at frequencies ranging from 30 to 2000 Hz.

    Observing waves at lower and higher frequencies in the gravitational wave spectrum remains challenging, however. At lower frequencies (around 10–30 Hz), the problem stems from vibrational noise in the mirrors. Although these mirrors are hefty objects – each one measures 34 cm across, is 20 cm thick and has a mass of around 40 kg – the incredible precision required to detect gravitational waves at these frequencies means that even the minute amount of energy they absorb from the laser beam is enough to knock them out of whack.
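
    Two quick sanity checks on the numbers quoted above (our arithmetic, assuming a proton charge radius of about 0.84 fm and fused-silica mirror substrates with a density of roughly 2200 kg/m³):

    ```python
    import math

    # 1) Displacement sensitivity compared with the size of a proton
    #    (assumed proton charge radius ~0.84 fm)
    sensitivity = 1e-19               # m
    proton_radius = 0.84e-15          # m
    print(f"sensitivity / proton radius ~ {sensitivity / proton_radius:.1e}")  # ~1e-4

    # 2) Do the quoted mirror dimensions and mass hang together?
    #    (assumed fused-silica density ~2200 kg/m^3)
    diameter, thickness, density = 0.34, 0.20, 2200.0
    mass = math.pi * (diameter / 2)**2 * thickness * density
    print(f"implied mirror mass ~ {mass:.0f} kg")   # ~40 kg, as quoted
    ```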

    At higher frequencies (150–2000 Hz), measurements are instead limited by quantum shot noise. This is caused by the random arrival time of photons at LIGO’s output photodetectors and is a fundamental consequence of the fact that the laser field is quantized.

    A novel adaptive optics device

    Jonathan Richardson, the physicist who led this latest study, explains that FROSTI is designed to reduce quantum shot noise by allowing the mirrors to cope with much higher levels of laser power. At its heart is a novel adaptive optics device that is designed to precisely reshape the surfaces of LIGO’s main mirrors under laser powers exceeding 1 megawatt (MW), which is nearly five times the power used at LIGO today.

    Though its name implies cooling, FROSTI actually uses heat to restore the mirror’s surface to its original shape. It does this by projecting infrared radiation onto test masses in the interferometer to create a custom heat pattern that “smooths out” distortions and so allows for fine-tuned, higher-order corrections.

    The single most challenging aspect of FROSTI’s design, and one that Richardson says shaped its entire concept, is the requirement that it cannot introduce even more noise into the LIGO interferometer. “To meet this stringent requirement, we had to use the most intensity-stable radiation source available – that is, an internal blackbody emitter with a long thermal time constant,” he tells Physics World. “Our task, from there, was to develop new non-imaging optics capable of reshaping the blackbody thermal radiation into a complex spatial profile, similar to one that could be created with a laser beam.”

    Richardson anticipates that FROSTI will be a critical component for future LIGO upgrades – upgrades that will themselves serve as blueprints for even more sensitive next-generation observatories like the proposed Cosmic Explorer in the US and the Einstein Telescope in Europe. “The current prototype has been tested on a 40-kg LIGO mirror, but the technology is scalable and will eventually be adapted to the 440-kg mirrors envisioned for Cosmic Explorer,” he says.

    Jan Harms, a physicist at Italy’s Gran Sasso Science Institute who was not involved in this work, describes FROSTI as “an ingenious concept to apply higher-order corrections to the mirror profile.” Though it still needs to pass the final test of being integrated into the actual LIGO detectors, Harms notes that “the results from the prototype are very promising”.

    Richardson and colleagues are continuing to develop extensions to their technology, building on the successful demonstration of their first prototype. “In the future, beyond the next upgrade of LIGO (A+), the FROSTI radiation will need to be shaped into an even more complex spatial profile to enable the highest levels of laser power (1.5 MW) ultimately targeted,” explains Richardson. “We believe this can be achieved by nesting two or more FROSTI actuators together in a single composite, with each targeting a different radial zone of the test mass surfaces. This will allow us to generate extremely finely-matched optical wavefront corrections.”

    The present study is detailed in Optica.

    The post New adaptive optics technology boosts the power of gravitational wave detectors appeared first on Physics World.

    https://physicsworld.com/a/new-adaptive-optics-technology-boosts-the-power-of-gravitational-wave-detectors/
    Isabelle Dumé

    A SMART approach to treating lung cancers in challenging locations

    Stereotactic MR-guided adaptive radiotherapy could prove a safe and effective treatment option for patients with centrally located lung cancers

    The post A SMART approach to treating lung cancers in challenging locations appeared first on Physics World.

    Radiation treatment for patients with lung cancer represents a balancing act, particularly if malignant lesions are centrally located near to critical structures. The radiation may destroy the tumour, but vital organs may be seriously damaged as well.

    The standard treatment for non-small cell lung cancer (NSCLC) is stereotactic ablative body radiotherapy (SABR), which delivers intense radiation doses in just a few treatment sessions and achieves excellent local control. For ultracentral lung lesions, however – defined as having a planning target volume (PTV) that abuts or overlaps the proximal bronchial tree, oesophagus or pulmonary vessels – the high risk of severe radiation toxicity makes SABR highly challenging.

    A research team at GenesisCare UK, an independent cancer care provider operating nine treatment centres in the UK, has now demonstrated that stereotactic MR-guided adaptive radiotherapy (SMART)-based SABR may be a safer and more effective option for treating ultracentral metastatic lesions in patients with histologically confirmed NSCLC. They report their findings in Advances in Radiation Oncology.

    SMART uses diagnostic-quality MR scans to provide real-time imaging, 3D multiplanar soft-tissue tracking and automated beam control of an advanced linear accelerator. The idea is to use daily online volume adaptation and plan re-optimization to account for any changes in tumour size and position relative to organs-at-risk (OAR). Real-time imaging enables treatment in breath-hold with gated beam delivery (automatically pausing delivery if the target moves outside a defined boundary), eliminating the need for an internal target volume and enabling smaller PTV margins.

    The approach offers potential to enhance treatment precision and target coverage while improving sparing of adjacent organs compared with conventional SABR, first author Elena Moreno-Olmedo and colleagues contend.

    A safer treatment option

    The team conducted a study to assess the incidence of SABR-related toxicities in patients with histologically confirmed NSCLC undergoing SMART-based SABR. The study included 11 patients with 18 ultracentral lesions, the majority of whom had oligometastatic or oligoprogressive disease.

    Patients received five to eight treatment fractions, to a median dose of 40 Gy (ranging from 30 to 60 Gy). The researchers generated fixed-field SABR plans with dosimetric aims including a PTV V100% (the volume receiving at least 100% of the prescription dose) of 95% or above, a PTV V95% of 98% or above and a maximum dose of between 110% and 140%. PTV coverage was compromised where necessary to meet OAR constraints, with a minimum PTV V100% of at least 70%.
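
    For readers unfamiliar with the dose–volume shorthand, here is a minimal sketch of how metrics such as V100% are computed from per-voxel doses in the target (entirely made-up dose values, for illustration only):

    ```python
    import numpy as np

    def v_metric(doses, prescription, percent):
        """Fraction of target voxels receiving at least `percent`% of the prescription dose."""
        return np.mean(doses >= prescription * percent / 100.0) * 100.0

    prescription = 40.0                              # Gy, the median dose in the study
    rng = np.random.default_rng(0)
    ptv_doses = rng.normal(43.0, 1.5, size=10_000)   # hypothetical per-voxel doses in the PTV

    print(f"V100% = {v_metric(ptv_doses, prescription, 100):.1f}%  (aim: >= 95%)")
    print(f"V95%  = {v_metric(ptv_doses, prescription, 95):.1f}%   (aim: >= 98%)")
    print(f"max dose = {100 * ptv_doses.max() / prescription:.0f}% of prescription (aim: 110-140%)")
    ```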

    SABR was performed using a 6 MV 0.35 T MRIdian linac with gated delivery during repeated breath-holds, under continuous MR guidance. Based on daily MRI scans, online plan adaptation was performed for all of the 78 delivered fractions.

    The researchers report that both the PTV volume and PTV overlap with ultracentral OARs were reduced in SMART treatments compared with conventional SABR. The median SMART PTV was 10.1 cc, compared with 30.4 cc for the simulated SABR PTV, while the median PTV overlap with OARs was 0.85 cc for SMART (8.4% of the PTV) and 4.7 cc for conventional SABR.

    In terms of treatment-related side effects for SMART, the rates of acute and late grade 1–2 toxicities were 54% and 18%, respectively, with no grade 3–5 toxicities observed. This demonstrates the technique’s increased safety compared with non-adaptive SABR treatments, which have exhibited high rates of severe toxicity, including treatment-related deaths, in ultracentral tumours.

    Two-thirds of patients were alive at the median follow-up point of 28 months, and 93% were free from local progression at 12 months. The median progression-free survival was 5.8 months and median overall survival was 20 months.

    Acknowledging the short follow-up time frame, the researchers note that additional late toxicities may occur. However, they are hopeful that SMART will be considered as a favourable treatment option for patients with ultracentral NSCLC lesions.

    “Our analysis demonstrates that hypofractionated SMART with daily online adaptation for ultracentral NSCLC achieved comparable local control to conventional non-adaptive SABR, with a safer toxicity profile,” they write. “These findings support the consideration of SMART as a safer and effective treatment option for this challenging subgroup of thoracic tumours.”

    The SUNSET trial

    SMART-based SABR remains an emerging cancer treatment that is not yet available in many cancer treatment centres. Despite the high risk for patients with ultracentral tumours, SABR is the standard treatment for inoperable NSCLC.

    The phase 1 clinical trial, Stereotactic radiation therapy for ultracentral NSCLC: a safety and efficacy trial (SUNSET), assessed the use of SBRT for ultracentral tumours in 30 patients with early-stage NSCLC treated at five Canadian cancer centres. In all cases, the PTVs touched or overlapped the proximal bronchial tree, the pulmonary artery, the pulmonary vein or the oesophagus. Led by Meredith Giuliani of the Princess Margaret Cancer Centre, the trial aimed to determine the maximum tolerated radiation dose associated with a less than 30% rate of grade 3–5 toxicity within two years of treatment.

    All patients received 60 Gy in eight fractions. Dose was prescribed to deliver a PTV V100% of 95%, a PTV V90% of 99% and a maximum dose of no more than 120% of the prescription dose, with OAR constraints prioritized over PTV coverage. All patients had daily cone-beam CT imaging to verify tumour position before treatment.

    At a median follow-up of 37 months, two patients (6.7%) experienced dose-limiting grade 3–5 toxicities – an adverse event rate within the prespecified acceptability criteria. The three-year overall survival was 72.5% and the three-year progression-free survival was 66.1%.

    In a subsequent dosimetric analysis, the researchers report that they did not identify any relationship between OAR dose and toxicity, within the dose constraints used in the SUNSET trial. They note that 73% of patients could be treated without compromise of the PTV, and where compromise was needed, the mean PTV D95 (the minimum dose delivered to 95% of the PTV) remained high at 52.3 Gy.

    As expected, plans that overlapped with central OARs were associated with worse local control, but PTV undercoverage was not. “[These findings suggest] that the approach of reducing PTV coverage to meet OAR constraints does not appear to compromise local control, and that acceptable toxicity rates are achievable using 60 Gy in eight fractions,” the team writes. “In the future, use of MRI or online adaptive SBRT may allow for safer treatment delivery by limiting dose variation with anatomic changes.”

    The post A SMART approach to treating lung cancers in challenging locations appeared first on Physics World.

    https://physicsworld.com/a/a-smart-approach-to-treating-lung-cancers-in-challenging-locations/
    No Author

    Spiral catheter optimizes drug delivery to the brain

    New design could help treat a wide range of neurological disorders

    The post Spiral catheter optimizes drug delivery to the brain appeared first on Physics World.

    Researchers in the United Arab Emirates have designed a new catheter that can deliver drugs to entire regions of the brain. Developed by Batoul Khlaifat and colleagues at New York University Abu Dhabi, the catheter’s helical structure and multiple outflow ports could make it both safer and more effective for treating a wide range of neurological disorders.

    Modern treatments for brain-related conditions including Parkinson’s disease, epilepsy, and tumours often involve implanting microfluidic catheters that deliver controlled doses of drug-infused fluids to highly localized regions of the brain. Today, these implants are made from highly flexible materials that closely mimic the soft tissue of the brain. This makes them far less invasive than previous designs.

    However, there is still much room for improvement, as Khlaifat explains. “Catheter design and function have long been limited by the neuroinflammatory response after implantation, as well as the unequal drug distribution across the catheter’s outlets,” she says.

    A key challenge with this approach is that each of the brain’s distinct regions has highly irregular shapes, which makes it incredibly difficult to target via single drug doses. Instead, doses must be delivered either through repeated insertions from a single port at the end of a catheter, or through single insertions across multiple co-implanted catheters. Either way, the approach is highly invasive, and runs the risk of further trauma to the brain.

    Multiple ports

    In their study, Khlaifat’s team explored how many of these problems stem from existing catheter designs, which tend to be simple tubes with a single input port at one end and a single output port at the other. Using fluid dynamics simulations, they started by investigating how drug outflow would change when multiple output ports are positioned along the length of the catheter.

    To ensure this outflow is delivered evenly, they carefully adjusted the diameter of each port to account for the change in fluid pressure along the catheter’s length – so that four evenly spaced ports could each deliver roughly one quarter of the total flow. Building on this innovation, the researchers then explored how the shape of the catheter itself could be adjusted to optimize delivery even further.
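
    The flow-balancing step can be illustrated with a toy lumped-resistance model (a rough Python sketch assuming laminar Hagen–Poiseuille flow and invented dimensions and fluid properties – not the team's actual fluid dynamics simulations):

    # Toy sketch: a catheter with four side ports, modelled as a lumped
    # Poiseuille-resistance network. Port diameters are iteratively rescaled
    # so each port delivers roughly one quarter of the total outflow.
    import numpy as np

    MU = 1.0e-3          # Pa.s, water-like viscosity (assumed)
    SEG_LEN = 5e-3       # m, spacing between ports (assumed)
    CATH_DIAM = 0.5e-3   # m, catheter inner diameter (assumed)
    PORT_LEN = 0.2e-3    # m, effective port channel length (assumed)
    P_IN = 1.0e4         # Pa, inlet pressure above ambient (assumed)
    N_PORTS = 4

    def conductance(diam, length):
        # Hagen-Poiseuille conductance of a circular channel
        return np.pi * diam**4 / (128.0 * MU * length)

    def port_flows(port_diams):
        # Solve the linear pressure network (closed tip) and return each port's outflow
        g_seg = conductance(CATH_DIAM, SEG_LEN)
        g_port = conductance(port_diams, PORT_LEN)
        A = np.zeros((N_PORTS, N_PORTS))
        b = np.zeros(N_PORTS)
        for i in range(N_PORTS):
            A[i, i] += g_seg + g_port[i]      # upstream segment plus port
            if i > 0:
                A[i, i - 1] -= g_seg          # inflow from previous node
            else:
                b[i] += g_seg * P_IN          # first node is fed from the inlet
            if i < N_PORTS - 1:
                A[i, i] += g_seg              # segment to the next node
                A[i, i + 1] -= g_seg
        pressures = np.linalg.solve(A, b)
        return g_port * pressures             # outflow through each port (ambient = 0)

    diams = np.full(N_PORTS, 0.1e-3)          # start with identical ports
    for _ in range(50):
        q = port_flows(diams)
        diams *= (q.mean() / q) ** 0.25       # flow scales roughly with diameter^4
    q = port_flows(diams)
    print("flow fractions:", np.round(q / q.sum(), 3))

    In this toy model the downstream ports see a smaller driving pressure, so the iteration assigns them slightly wider openings to even out the flow.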

    “We varied the catheter design from a straight catheter to a helix of the same small diameter, allowing for a larger area of drug distribution in the target implantation region with minimal invasiveness,” explains team member Khalil Ramadi. “This helical shape also allows us to resist buckling on insertion, which is a major problem for miniaturized straight catheters.”

    Helical catheter

    Based on their simulations, the team fabricated a helical catheter they call Strategic Precision Infusion for Regional Administration of Liquid, or SPIRAL. In a first set of experiments under controlled lab conditions, they verified their prediction of even outflow rates across the catheter’s outlets.

    “Our helical device was also tested in mouse models alongside its straight counterpart to study its neuroinflammatory response,” Khlaifat says. “There were no significant differences between the two designs.”

    Having validated the safety of their approach, the researchers are now hopeful that SPIRAL could pave the way for new and improved methods for targeted drug delivery within the brain. With the ability to target entire regions of the brain with smaller, more controlled doses, this future generation of implanted catheters could ultimately prove to be both safer and more effective than existing designs.

    “These catheters could be optimized for each patient through our computational framework to ensure only regions that require dosing are exposed to therapy, all through a single insertion point in the skull,” says team member Mahmoud Elbeh. “This tailored approach could improve therapies for brain disorders such as epilepsy and glioblastomas.”

    The research is described in the Journal of Neural Engineering.

    The post Spiral catheter optimizes drug delivery to the brain appeared first on Physics World.

    https://physicsworld.com/a/spiral-catheter-optimizes-drug-delivery-to-the-brain/
    No Author

    Performance metrics and benchmarks point the way to practical quantum advantage

    NPL is coordinating a broad-scope UK research initiative on performance metrics and benchmarking for quantum computers

    The post Performance metrics and benchmarks point the way to practical quantum advantage appeared first on Physics World.

    Quantum connections Measurement scientists are seeking to understand and quantify the relative performance of quantum computers from different manufacturers as well as across the myriad platform technologies. (Courtesy: iStock/Bartlomiej Wroblewski)

    From quantum utility today to quantum advantage tomorrow: incumbent technology companies – among them Google, Amazon, IBM and Microsoft – and a wave of ambitious start-ups are on a mission to transform quantum computing from applied research endeavour to mainstream commercial opportunity. The end-game: quantum computers that can be deployed at-scale to perform computations significantly faster than classical machines while addressing scientific, industrial and commercial problems beyond the reach of today’s high-performance computing systems.

    Meanwhile, as technology translation gathers pace across the quantum supply chain, government laboratories and academic scientists must maintain their focus on the “hard yards” of precompetitive research. That means prioritizing foundational quantum hardware and software technologies, underpinned by theoretical understanding, experimental systems, device design and fabrication – and pushing out along all these R&D pathways simultaneously.

    Bringing order to disorder

    Equally important is the requirement to understand and quantify the relative performance of quantum computers from different manufacturers as well as across the myriad platform technologies – among them superconducting circuits, trapped ions, neutral atoms as well as photonic and semiconductor processors. A case study in this regard is a broad-scope UK research collaboration that, for the past four years, has been reviewing, collecting and organizing a holistic taxonomy of metrics and benchmarks to evaluate the performance of quantum computers against their classical counterparts as well as the relative performance of competing quantum platforms.

    Funded by the National Quantum Computing Centre (NQCC), which is part of the UK National Quantum Technologies Programme (NQTP), and led by scientists at the National Physical Laboratory (NPL), the UK’s National Metrology Institute, the cross-disciplinary consortium has taken on an endeavour that is as sprawling as it is complex. The challenge lies in the diversity of quantum hardware platforms in the mix, as well as the emergence of two different approaches to quantum computing – one a gate-based framework for universal quantum computation, the other an analogue approach tailored to outperforming classical computers on specific tasks.

    “Given the ambition of this undertaking, we tapped into a deep pool of specialist domain knowledge and expertise provided by university colleagues at Edinburgh, Durham, Warwick and several other centres-of-excellence in quantum,” explains Ivan Rungger, a principal scientist at NPL, professor in computer science at Royal Holloway, University of London, and lead scientist on the quantum benchmarking project. That core group consulted widely within the research community and with quantum technology companies across the nascent supply chain. “The resulting study,” adds Rungger, “positions transparent and objective benchmarking as a critical enabler for trust, comparability and commercial adoption of quantum technologies, aligning closely with NPL’s mission in quantum metrology and standards.”

    Not all metrics are equal – or mature

    Made to measure NPL’s Institute for Quantum Standards and Technology (above) is the UK’s national metrology institute for quantum science. (Courtesy: NPL)

    For context, a number of performance metrics used to benchmark classical computers can also be applied directly to quantum computers, such as the speed of operations, the number of processing units and the probability of errors occurring during a computation. That only goes so far, though, with all manner of dedicated metrics emerging in the past decade to benchmark the performance of quantum computers – ranging from their individual hardware components to entire applications.

    Complexity reigns, it seems, and navigating the extensive literature can prove overwhelming, while the levels of maturity of different metrics vary significantly. Objective comparisons aren’t straightforward either – not least because variations of the same metric are commonly deployed, and the data disclosed alongside a reported metric value are often not sufficient to reproduce the results.

    “Many of the approaches provide similar overall qualitative performance values,” Rungger notes, “but the divergence in the technical implementation makes quantitative comparisons difficult and, by extension, slows progress of the field towards quantum advantage.”

    The task then is to rationalize the metrics used to evaluate the performance for a given quantum hardware platform to a minimal yet representative set agreed across manufacturers, algorithm developers and end-users. These benchmarks also need to follow some agreed common approaches to fairly and objectively evaluate quantum computers from different equipment vendors.

    With these objectives in mind, Rungger and colleagues conducted a deep-dive review that has yielded a comprehensive collection of metrics and benchmarks to allow holistic comparisons of quantum computers, assessing the quality of hardware components all the way to system-level performance and application-level metrics.

    Drill down further and there’s a consistent format for each metric that includes its definition, a description of the methodology, the main assumptions and limitations, and a linked open-source software package implementing the methodology. The software transparently demonstrates the methodology and can also be used in practical, reproducible evaluations of all metrics.
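
    To make that format concrete, here is one hypothetical way such an entry could be represented in software; the field names and the example are invented for illustration and are not the consortium's actual schema:

    # Illustrative sketch only: one possible record structure for a metric entry
    # (definition, methodology, assumptions/limitations, linked software).
    from dataclasses import dataclass, field

    @dataclass
    class MetricRecord:
        name: str                      # e.g. a gate-fidelity or application-level metric
        level: str                     # "component", "system" or "application"
        definition: str                # what the metric quantifies
        methodology: str               # how it is measured and reported
        assumptions: list[str] = field(default_factory=list)
        limitations: list[str] = field(default_factory=list)
        software_url: str = ""         # open-source reference implementation

    example = MetricRecord(
        name="two_qubit_gate_fidelity",                       # hypothetical entry
        level="component",
        definition="Average fidelity of a native two-qubit gate.",
        methodology="Estimated via randomized benchmarking over repeated runs.",
        assumptions=["Gate-independent noise"],
        limitations=["Insensitive to some coherent error types"],
        software_url="https://example.org/benchmark-suite",   # placeholder
    )
    print(example.name, "-", example.level)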

    “As research on metrics and benchmarks progresses, our collection of metrics and the associated software for performance evaluation are expected to evolve,” says Rungger. “Ultimately, the repository we have put together will provide a ‘living’ online resource, updated at regular intervals to account for community-driven developments in the field.”

    From benchmarking to standards

    Innovation being what it is, those developments are well under way. For starters, the importance of objective and relevant performance benchmarks for quantum computers has led several international standards bodies to initiate work on specific areas that are ready for standardization – work that, in turn, will give manufacturers, end-users and investors an informed evaluation of the performance of a range of quantum computing components, subsystems and full-stack platforms.

    What’s evident is that the UK’s voice on metrics and benchmarking is already informing the collective conversation around standards development. “The quantum computing community and international standardization bodies are adopting a number of concepts from our approach to benchmarking standards,” notes Deep Lall, a quantum scientist in Rungger’s team at NPL and lead author of the study. “I was invited to present our work to a number of international standardization meetings and scientific workshops, opening up widespread international engagement with our research and discussions with colleagues across the benchmarking community.”

    He continues: “We want the UK effort on benchmarking and metrics to shape the broader international effort. The hope is that the collection of metrics we have pulled together, along with the associated open-source software provided to evaluate them, will guide the development of standardized benchmarks for quantum computers and speed up the progress of the field towards practical quantum advantage.”

    That’s a view echoed – and amplified – by Cyrus Larijani, NPL’s head of quantum programme. “As we move into the next phase of NPL’s quantum strategy, the importance of evidence-based decision making becomes ever-more critical,” he concludes. “By grounding our strategic choices in robust measurement science and real-world data, we ensure that our innovations not only push the boundaries of quantum technology but also deliver meaningful impact across industry and society.”

    Further reading

    Deep Lall et al. 2025 A review and collection of metrics and benchmarks for quantum computers: definitions, methodologies and software https://arxiv.org/abs/2502.06717

    The headline take from NQCC

    Quantum computing technology has reached the stage where a number of methods for performance characterization are backed by a large body of real-world implementation and use, as well as by theoretical proofs. These mature benchmarking methods will benefit from commonly agreed-upon approaches that are the only way to fairly, unambiguously and objectively benchmark quantum computers from different manufacturers.

    “Performance benchmarks are a fundamental enabler of technology innovation in quantum computing,” explains Konstantinos Georgopoulos, who heads up the NQCC’s quantum applications team and is responsible for the centre’s liaison with the NPL benchmarking consortium. “How do we understand performance? How do we compare capabilities? And, of course, what are the metrics that help us to do that? These are the leading questions we addressed through the course of this study.”

    If the importance of benchmarking is a given, so too is collaboration and the need to bring research and industry stakeholders together from across the quantum ecosystem. “I think that’s what we achieved here,” says Georgopoulos. “The long list of institutions and experts who contributed their perspectives on quantum computing was crucial to the success of this project. What we’ve ended up with are better metrics, better benchmarks, and a better collective understanding to push forward with technology translation that aligns with end-user requirements across diverse industry settings.”

    End note: NPL retains copyright on this article.

    The post Performance metrics and benchmarks point the way to practical quantum advantage appeared first on Physics World.

    https://physicsworld.com/a/performance-metrics-and-benchmarks-point-the-way-to-practical-quantum-advantage/
    No Author

    Quantum computing and AI join forces for particle physics

    We explore how new computing technologies could guide future LHC experiments

    The post Quantum computing and AI join forces for particle physics appeared first on Physics World.

    This episode of the Physics World Weekly podcast explores how quantum computing and artificial intelligence can be combined to help physicists search for rare interactions in data from an upgraded Large Hadron Collider.

    My guest is Javier Toledo-Marín, and we spoke at the Perimeter Institute in Waterloo, Canada. As well as having an appointment at Perimeter, Toledo-Marín is also associated with the TRIUMF accelerator centre in Vancouver.

    Toledo-Marín and colleagues have recently published a paper called “Conditioned quantum-assisted deep generative surrogate for particle–calorimeter interactions”.


    This podcast is supported by Delft Circuits.

    As gate-based quantum computing continues to scale, Delft Circuits provides the i/o solutions that make it possible.

    The post Quantum computing and AI join forces for particle physics appeared first on Physics World.

    https://physicsworld.com/a/quantum-computing-and-ai-join-forces-for-particle-physics/
    Hamish Johnston

    Master’s programme takes microelectronics in new directions

    Combining a solid foundation in current production technologies with the chance to explore emerging materials and structures, the course prepares students for diverse careers in microelectronics.

    The post Master’s programme takes microelectronics in new directions appeared first on Physics World.

    Professor Zhao Jiong, who leads a Master’s programme in microelectronics technology and materials, has been recognized for his pioneering research in 2D ferroelectrics (Courtesy: PolyU)

    The microelectronics sector is known for its relentless drive for innovation, continually delivering performance and efficiency gains within ever more compact form factors. Anyone aspiring to build a career in this fast-moving field needs not just a thorough grounding in current tools and techniques, but also an understanding of the next-generation materials and structures that will propel future progress.

    That’s the premise behind a Master’s programme in microelectronics technology and materials at the Hong Kong Polytechnic University (PolyU). Delivered by the Department of Applied Physics – globally recognized for its pioneering research in technologies such as two-dimensional materials, nanoelectronics and artificial intelligence – the programme aims to provide students with both the fundamental knowledge and practical skills they need to kickstart their professional future, whether they choose to pursue further research or to find a job in industry.

    “The programme provides students with all the key skills they need to work in microelectronics, such as circuit design, materials processing and failure analysis,” says programme leader Professor Zhao Jiong, whose research focuses on 2D ferroelectrics. “But they also have direct access to more than 20 faculty members who are actively investigating novel materials and structures that go beyond silicon-based technologies.”

    The course is also unusual in providing a combined focus on electronics engineering and materials science, giving students a thorough understanding of the underlying semiconductors and device structures as well as their use in mass-produced integrated circuits. That fundamental knowledge is reinforced through regular experimental work, which provides the students with hands-on experience of fabricating and testing electronic devices. “Our cleanroom laboratory is equipped with many different instruments for microfabrication, including thin-film deposition, etching and photolithography, as well as advanced characterization tools for understanding their operating mechanisms and evaluating their performance,” adds Zhao.

    In a module focusing on thin-film materials, for example, students gain valuable experience from practical sessions that enable them to operate the equipment for different growth techniques, such as sputtering, molecular beam epitaxy, and both physical and chemical vapour deposition. In another module on materials analysis and characterization, the students are tasked with analysing the layered structure of a standard computer chip by making cross-sections that can be studied with a scanning electron microscope.

    During the programme students have access to a cleanroom laboratory that gives them hands-on experience of using advanced tools for fabricating and characterizing electronic materials and structures (Courtesy: PolyU)

    That practical experience extends to circuit design, with students learning how to use state-of-the-art software tools for configuring, simulating and analysing complex electronic layouts. “Through this experimental work students gain the technical skills they need to design and fabricate integrated circuits, and to optimize their performance and reliability through techniques like failure analysis,” says Professor Dai Jiyan, PolyU Associate Dean of Students, who also teaches the module on thin-film materials. “This hands-on experience helps to prepare them for working in a manufacturing facility or for continuing their studies at the PhD level.”

    Also integrated into the teaching programme is the use of artificial intelligence to assist key tasks, such as defect analysis, materials selection and image processing. Indeed, PolyU has established a joint laboratory with Huawei to investigate possible applications of AI tools in electronic design, providing the students with early exposure to emerging computational methods that are likely to shape the future of the microelectronics industry. “One of our key characteristics is that we embed AI into our teaching and laboratory work,” says Dai. “Two of the modules are directly related to AI, while the joint lab with Huawei helps students to experiment with using AI in circuit design.”

    Now in its third year, the Master’s programme was designed in collaboration with Hong Kong’s Applied Science and Technology Research Institute (ASTRI), established in 2000 to enhance the competitiveness of the region through the use of advanced technologies. Researchers at PolyU already pursue joint projects with ASTRI in areas like chip design, microfabrication and failure analysis. As part of the programme, these collaborators are often invited to give guest lectures or to guide the laboratory work. “Sometimes they even provide some specialized instruments for the students to use in their experiments,” says Zhao. “We really benefit from this collaboration.”

    Once primed with the knowledge and experience from the taught modules, the students have the opportunity to work alongside one of the faculty members on a short research project. They can choose whether to focus on a topic that is relevant to present-day manufacturing, such as materials processing or advanced packaging technologies, or to explore the potential of emerging materials and devices across applications ranging from solar cells and microfluidics to next-generation memories and neuromorphic computing.

    “It’s very interesting for the students to get involved in these projects,” says Zhao. “They learn more about the research process, which can make them more confident to take their studies to the next level. All of our faculty members are engaged in important work, and we can guide the students towards a future research field if that’s what they are interested in.”

    There are also plenty of progression opportunities for those who are more interested in pursuing a career in industry. As well as providing support and advice through its joint lab in AI, Huawei arranges visits to its manufacturing facilities and offers some internships to interested students. PolyU also organizes visits to Hong Kong’s Science Park, home to multinational companies such as Infineon as well as a large number of start-up companies in the microelectronics sector. Some of these might support a student’s research project, or offer an internship in areas such as circuit design or microfabrication.

    The international outlook offered by PolyU has made the Master’s programme particularly appealing to students from mainland China, but Zhao and Dai believe that the forward-looking ethos of the course should make it an appealing option for graduates across Asia and beyond. “Through the programme, the students gain knowledge about all aspects of the microelectronics industry, and how it is likely to evolve in the future,” says Dai. “The knowledge and technical skills gained by the students offer them a competitive edge for building their future career, whether they want to find a job in industry or to continue their research studies.”

    The post Master’s programme takes microelectronics in new directions appeared first on Physics World.

    https://physicsworld.com/a/masters-programme-takes-microelectronics-in-new-directions/
    No Author

    Resonant laser ablation selectively destroys pancreatic tumours

    A mid-infrared femtosecond laser tuned to the collagen absorption peak can ablate pancreatic cancer while preserving healthy pancreatic tissues

    The post Resonant laser ablation selectively destroys pancreatic tumours appeared first on Physics World.

    Pancreatic ductal adenocarcinoma (PDAC), the most common type of pancreatic cancer, is an aggressive tumour with a poor prognosis. Surgery remains the only potential cure, but is feasible in just 10–15% of cases. A team headed up at Sichuan University in China has now developed a selective laser ablation technique designed to target PDAC while leaving healthy pancreatic tissue intact.

    Thermal ablation techniques, such as radiofrequency, microwave or laser ablation, could provide a treatment option for patients with locally advanced PDAC, but existing methods risk damaging surrounding blood vessels and healthy pancreatic tissues. The new approach, described in Optica, uses the molecular fingerprint of pancreatic tumours to enable selective ablation.

    The technique exploits the fact that PDAC tissue contains a large amount of collagen compared with healthy pancreatic tissue. Amide-I collagen fibres exhibit a strong absorption peak at 6.1 µm, so the researchers surmised that tuning the treatment laser to this resonant wavelength could enable efficient tumour ablation with minimal collateral thermal damage. They therefore designed a femtosecond pulsed laser that can deliver 6.1 µm pulses with a power of more than 1 W.

    Resonant wavelength Fourier-transform infrared spectra of PDAC (blue) and the laser (red). (Courtesy: Houkun Liang, Sichuan University)

    “We developed a mid-infrared femtosecond laser system for the selective tissue ablation experiment,” says team leader Houkun Liang. “The system is tunable in the wavelength range of 5 to 11 µm, aligning with various molecular fingerprint absorption peaks such as amide proteins, cholesteryl ester, hydroxyapatite and so on.”

    Liang and colleagues first examined the ablation efficiency of three different laser wavelengths on two types of pancreatic cancer cells. Compared with non-resonant wavelengths of 1 and 3 µm, the collagen-resonant 6.1 µm laser was far more effective in killing pancreatic cancer cells, reducing cell viability to ranges of 0.27–0.32 and 0.37–0.38, at 0 and 24 h, respectively.

    The team observed similar results in experiments on ectopic PDAC tumours cultured on the backs of mice. Irradiation at 6.1 µm led to five to 10 times deeper tumour ablation than seen for the non-resonant wavelengths (despite using a laser power of 5 W for 1 µm ablation and just 500 mW for 6.1 and 3 µm), indicating that 6.1 µm is the optimal wavelength for PDAC ablation surgery.

    To validate the feasibility and safety of 6.1 µm laser irradiation, the team used the technique to treat PDAC tumours on live mice. Nine days after ablation, the tumour growth rate in treated mice was significantly suppressed, with an average tumour volume of 35.3 mm³. In contrast, tumour volume in a control group of untreated mice reached an average of 292.7 mm³, roughly eight times the size of the ablated tumours. No adverse symptoms were observed following the treatment.

    Clinical potential

    The researchers also used 6.1 µm laser irradiation to ablate pancreatic tissue samples (including normal tissue and PDAC) from 13 patients undergoing surgical resection. They used a laser power of 1 W and four scanning speeds (0.5, 1, 2 and 3 mm/s) with 10 ablation passes, examining 20 to 40 samples for each parameter.

    At the slower scanning speeds, excessive energy accumulation resulted in comparable ablation depths. At speeds of 2 or 3 mm/s, however, the average ablation depths in PDAC samples were 2.30 and 2.57 times greater than in normal pancreatic tissue, respectively, demonstrating the sought-after selective ablation. At 3 mm/s, for example, the ablation depth in tumour was 1659.09±405.97 µm, compared with 702.5±298.32 µm in normal pancreas.

    The findings show that by carefully controlling the laser power, scanning speed and number of passes, near-complete ablation of PDACs can be achieved, with minimal damage to surrounding healthy tissues.

    To further investigate the clinical potential of this technique, the researchers developed an anti-resonant hollow-core fibre (AR-HCF) that can deliver high-power 6.1 µm laser pulses deep inside the human body. The fibre has a core diameter of approximately 113 µm and low bending losses at radii under 10 cm. The researchers used the AR-HCF to perform 6.1 µm laser ablation of PDAC and normal pancreas samples. The ablation depth in PDAC was greater than in normal pancreas, confirming the selective ablation properties.

    “We are working together with a company to make a medical-grade fibre system to deliver the mid-infrared femtosecond laser. It consists of AR-HCF to transmit mid-infrared femtosecond pulses, a puncture needle and a fibre lens to focus the light and prevent liquid tissue getting into the fibre,” explains Liang. “We are also making efforts to integrate an imaging unit into the fibre delivery system, which will enable real-time monitoring and precise surgical guidance.”

    Next, the researchers aim to further optimize the laser parameters and delivery systems to improve ablation efficiency and stability. They also plan to explore the applicability of selective laser ablation to other tumour types with distinct molecular signatures, and to conduct larger-scale animal studies to verify long-term safety and therapeutic outcomes.

    “Before this technology can be used for clinical applications, highly comprehensive biological safety assessments are necessary,” Liang emphasizes. “Designing well-structured clinical trials to assess efficacy and risks, as well as navigating regulatory and ethical approvals, will be critical steps toward translation. There is a long way to go.”

    The post Resonant laser ablation selectively destroys pancreatic tumours appeared first on Physics World.

    https://physicsworld.com/a/resonant-laser-ablation-selectively-destroys-pancreatic-tumours/
    Tami Freeman

    Doorway states spotted in graphene-based materials

    Low-energy electron emission spectra depend on sample thickness

    The post Doorway states spotted in graphene-based materials appeared first on Physics World.

    Low-energy electrons escape from some materials via distinct “doorway” states, according to a study done by physicists at Austria’s Vienna Institute of Technology. The team studied graphene-based materials and found that the nature of the doorway states depended on the number of graphene layers in the sample.

    Low-energy electron (LEE) emission from solids is used across a range of materials analysis and processing applications including scanning electron microscopy and electron-beam induced deposition. However, the precise physics of the emission process is not well understood.

    Electrons are ejected from a material when a beam of electrons is fired at its surface. Some of these incident electrons will impart energy to electrons residing in the material, causing some resident electrons to be emitted from the surface. In the simplest model, the minimum energy needed for this LEE emission is the electron binding energy of the material.

    Frog in a box

    In this new study, however, researchers have shown that exceeding the binding energy is not enough for LEE emission from graphene-based materials. Not only does the electron need this minimum energy, it must also be in a specific doorway state or it is unlikely to escape. The team compare this phenomenon to the predicament of a frog in a cardboard box with a hole in its side. Not only must the frog hop high enough to escape the box, it must also begin its hop from a position that will carry it through the hole.

    For most materials, the energy spectrum of LEE electrons is featureless. However, it was known that graphite’s spectrum has an “X state” at about 3.3 eV, where emission is enhanced. This state could be related to doorway states.

    To search for doorway states, the Vienna team studied LEE emission from graphite as well as from single-layer and bi-layer graphene. Graphene is a sheet of carbon just one atom thick. Sheets can stick together via the relatively weak Van der Waals force to create multilayer graphene – and ultimately graphite, which comprises a large number of layers.

    Because electrons are mostly confined within the graphene layers, the electronic states of single-layer, bi-layer and multi-layer graphene are broadly similar. As a result, it was expected that these materials would have similar LEE emission spectra. However, the Vienna team found a surprising difference.

    Emission and reflection

    The team made their discovery by firing a beam of relatively low energy electrons (173 eV) incident at 60° to the surface of single-layer and bi-layer graphene as well as graphite. The scattered electrons are then detected at the same angle of reflection. Meanwhile, a second detector is pointed normal to the surface to capture any emitted electrons. In quantum mechanics electrons are indistinguishable, so the modifiers scattered and emitted are illustrative, rather than precise.

    The team looked for coincident signals in both detectors and plotted their results as a function of energy in 2D “heat maps”. These plots revealed that bi-layer graphene and graphite each had doorway states – but at different energies. However, single-layer graphene did not appear to have any doorway states. By combining experiments with calculations, the team showed that doorway states emerge above a certain number of layers. As a result the researchers showed that graphite’s X state can be attributed in part to a doorway state that appears at about five layers of graphene.
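
    As a generic illustration of this kind of coincidence analysis (a Python sketch with randomly generated stand-in data, not the Vienna group's code or measurements), paired detector readings can be binned into a 2D histogram as follows:

    # Sketch only: build a 2D "heat map" of coincident events from paired
    # detector readings. The event energies below are invented stand-ins.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    n_events = 50_000
    # Hypothetical energies (eV) of the specularly reflected electron and the
    # electron emitted normal to the surface, recorded in coincidence.
    e_reflected = rng.uniform(0, 173, n_events)
    e_emitted = rng.exponential(scale=5.0, size=n_events)

    counts, xedges, yedges = np.histogram2d(
        e_reflected, e_emitted,
        bins=[np.arange(0, 175, 2), np.arange(0, 30.5, 0.5)],
    )

    plt.imshow(counts.T, origin="lower", aspect="auto",
               extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]])
    plt.xlabel("reflected electron energy (eV)")
    plt.ylabel("emitted electron energy (eV)")
    plt.colorbar(label="coincidence counts")
    plt.show()
    # A doorway state would show up as a band of enhanced counts at a
    # fixed emitted-electron energy.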

    “For the first time, we’ve shown that the shape of the electron spectrum depends not only on the material itself, but crucially on whether and where such resonant doorway states exist,” explains Anna Niggas at the Vienna Institute of Technology.

    As well as providing important insights in how the electronic properties of graphene morph into the properties of graphite, the team says that their research could also shed light on the properties of other layered materials.

    The research is described in Physical Review Letters.

    The post Doorway states spotted in graphene-based materials appeared first on Physics World.

    https://physicsworld.com/a/doorway-states-spotted-in-graphene-based-materials/
    Hamish Johnston

    NASA’s Jet Propulsion Lab lays off a further 10% of staff

    The California-based lab has now lost almost a third of staff since the start of 2024

    The post NASA’s Jet Propulsion Lab lays off a further 10% of staff appeared first on Physics World.

    NASA’s Jet Propulsion Laboratory (JPL) is to lay off some 550 employees as part of a restructuring that began in July. The action affects about 11% of JPL’s employees and represents the lab’s third downsizing in the past 20 months. When the layoffs are complete by the end of the year, the lab will have roughly 4500 employees, down from about 6500 at the start of 2024. A further 4000 employees have already left NASA during the past six months via sacking, retirement or voluntary buyouts.

    Managed by the California Institute of Technology in Pasadena, JPL oversees scientific missions such as the Psyche asteroid probe, the Europa Clipper and the Perseverance rover on Mars. The lab also operates the Deep Space Network that keeps Earth in communication with unmanned space missions. JPL bosses already laid off about 530 staff – and 140 contractors – in February last year followed by another 325 people in November 2024.

    JPL director Dave Gallagher insists, however, that the new layoffs are not related to the current US government shutdown that began on 1 October. “[They are] essential to securing JPL’s future by creating a leaner infrastructure, focusing on our core technical capabilities, maintaining fiscal discipline, and positioning us to compete in the evolving space ecosystem,” he says in a message to employees.

    Judy Chu, Democratic Congresswoman for the constituency that includes JPL, is less optimistic. “Every layoff devastates the highly skilled and uniquely talented workforce that has made these accomplishments possible,” she says. “Together with last year’s layoffs, this will result in an untold loss of scientific knowledge and expertise that threatens the very future of American leadership in space exploration and scientific discovery.”

    John Logsdon, professor emeritus at George Washington University and founder of the university’s Space Policy Institute, says that the cuts are a direct result of the Trump administration’s approach to science and technology. “The administration gives low priority to robotic science and exploration, and has made draconic cuts to the science budget; that budget supports JPL’s work,” he told Physics World. “With these cuts, there is not enough money to support a JPL workforce sized for more ambitious activities. Ergo, staff cuts.”

    The post NASA’s Jet Propulsion Lab lays off a further 10% of staff appeared first on Physics World.

    https://physicsworld.com/a/nasas-jet-propulsion-lab-lays-off-a-further-10-of-staff/
    No Author

    How to solve the ‘future of physics’ problem

    Neil Downie believes that students need more support to devise, build and test their own projects

    The post How to solve the ‘future of physics’ problem appeared first on Physics World.

    I hugely enjoyed physics when I was a youngster. I had the opportunity both at home and school to create my own projects, which saw me make electronic circuits, crazy flying models like delta-wings and autogiros, and even a gas chromatograph with a home-made chart recorder. Eventually, this experience made me good enough to repair TV sets, and work in an R&D lab in the holidays devising new electronic flow controls.

    That enjoyment continued beyond school. I ended up doing a physics degree at the University of Oxford before working on the discovery of the gluon at the DESY lab in Hamburg for my PhD. Since then I have used physics in industry – first with British Oxygen/Linde and later with Air Products & Chemicals – to solve all sorts of different problems, build innovative devices and file patents.

    While some students have a similarly positive school experience and subsequent career path, not enough do. Quite simply, physics at school is the key to so many important, useful developments, both within and beyond physics. But we have a physics education problem, or to put it another way – a “future of physics” problem.

    There are just not enough school students enjoying and learning physics. On top of that there are not enough teachers enjoying physics and not enough students doing practical physics. The education problem is bad for physics and for many other subjects that draw on physics. Alas, it’s not a new problem but one that has been developing for years.

    Problem solving

    Many good points about the future of physics learning were made by the Institute of Physics in its 2024 report Fundamentals of 11 to 19 Physics. The report called for more physics lessons to have a practical element and encouraged more 16-year-old students in England, Wales and Northern Ireland to take AS-level physics at 17 so that they carry their GCSE learning at least one step further.

    Doing so would furnish students who are aiming to study another science or a technical subject with the necessary skills and give them the option to take physics A-level. Another recommendation is to link physics more closely to T-levels – two-year vocational courses in England for 16–19 year olds that are equivalent to A-levels – so that students following that path get a background in key aspects of physics, for example in engineering, construction, design and health.

    But do all these suggestions solve the problem? I don’t think they are enough and we need to go further. The key change to fix the problem, I believe, is to have student groups invent, build and test their own projects. Ideally this should happen before GCSE level so that students have the enthusiasm and background knowledge to carry them happily forward into A-level physics. They will benefit from “pull learning” – pulling in knowledge and active learning that they will remember for life. And they will acquire wider life skills too.

    Developing skillsets

    During my time in industry, I did outreach work with schools every few weeks and gave talks with demonstrations at the Royal Institution and the Franklin Institute. For many years I also ran a Saturday Science club in Guildford, Surrey, for pupils aged 8–15.

    Based on this, I wrote four Saturday Science books about the many playful and original demonstrations and projects that came out of it. Then at the University of Surrey, as a visiting professor, I had small teams of final-year students who devised extraordinary engineering – designing superguns for space launches, 3D printers for full-size buildings and volcanic power plants inter alia. A bonus was that other staff working with the students got more adventurous too.

    But that was working with students already committed to a scientific path. So lately I’ve been working with teachers to get students to devise and build their own innovative projects. We’ve had 14–15-year-old state-school students in groups of three or four, brainstorming projects, sketching possible designs, and gathering background information. We help them and get A-level students to help too (who gain teaching experience in the process). Students not only learn physics better but also pick up important life skills like brainstorming, team-working, practical work, analysis and presentations.

    We’ve seen lots of ingenuity and some great projects such as an ultrasonic scanner to sense wetness of cloth; a system to teach guitar by lighting up LEDs along the guitar neck; and measuring breathing using light passing through a band of Lycra around the patient below the ribs. We’ve seen the value of failure, both mistakes and genuine technical problems.

    Best of all, we’ve also noticed what might be dubbed the “combination bonus” – students having to think about how they combine their knowledge of one area of physics with another. A project involving a sensor, for example, will often involve electronics as well as the physics of the sensor itself, and so students’ knowledge of both areas is enhanced.

    Some teachers may question how you mark such projects. The answer is don’t mark them! Project work and especially group work is difficult to mark fairly and accurately, and the enthusiasm and increased learning by students working on innovative projects will feed through into standard school exam results.

    Not trying to grade such projects will mean more students go on to study physics further, potentially to do a physics-related extended project qualification – equivalent to half an A-level where students research a topic to university level – and do it well. Long term, more students will take physics with them into the world of work, from physics to engineering or medicine, from research to design or teaching.

    Such projects are often fun for students and teachers. Teachers are often intrigued and amazed by students’ ideas and ingenuity. So, let’s choose to do student-invented project work at school and let’s finally solve the future of physics problem.

    The post How to solve the ‘future of physics’ problem appeared first on Physics World.

    https://physicsworld.com/a/how-to-solve-the-future-of-physics-problem/
    No Author

    A recipe for quantum chaos

    New research has emerged proposing a way to consistently generate quantum chaos, a key ingredient in controlling quantum systems

    The post A recipe for quantum chaos appeared first on Physics World.

    The control of large, strongly coupled, multi-component quantum systems with complex dynamics is a challenging task.

    It is, however, an essential prerequisite for the design of quantum computing platforms and for the benchmarking of quantum simulators.

    A key concept here is that of quantum ergodicity. This is because quantum ergodic dynamics can be harnessed to generate highly entangled quantum states.

    In classical statistical mechanics, an ergodic system evolving over time will explore all of its possible microstates uniformly. Mathematically, this means that a sufficiently large collection of random samples from an ergodic process can represent the average statistical properties of the entire process.
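
    Stated compactly (a standard textbook relation, not a formula from the paper discussed below): for an observable $f$ and trajectory $x(t)$ of an ergodic system, the long-time average equals the ensemble average over the equilibrium distribution $\rho$,

    \[
    \lim_{T\to\infty}\frac{1}{T}\int_0^T f\bigl(x(t)\bigr)\,\mathrm{d}t \;=\; \int f(x)\,\rho(x)\,\mathrm{d}x ,
    \]

    for almost every initial condition.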

    Quantum ergodicity is simply the extension of this concept to the quantum realm.

    Closely related to this is the idea of chaos. A chaotic system is one that is very sensitive to its initial conditions. Small changes can be amplified over time, causing large changes in the future.

    The ideas of chaos and ergodicity are intrinsically linked as chaotic dynamics often enable ergodicity.

    Until now, it has been very challenging to predict which experimentally preparable initial states will trigger quantum chaos and ergodic dynamics over a reasonable time scale.

    In a new paper published in Reports on Progress in Physics, a team of researchers have proposed an ingenious solution to this problem using the Bose–Hubbard Hamiltonian.

    They took as an example ultracold atoms in an optical lattice (a typical choice for experiments in this field) to benchmark their method.

    The results show that there are certain tangible threshold values which must be crossed in order to ensure the onset of quantum chaos.

    These results will be invaluable for experimentalists working across a wide range of quantum sciences.

    Read the full article

    How to seed ergodic dynamics of interacting bosons under conditions of many-body quantum chaos – IOPscience

    Pausch et al. 2025 Rep. Prog. Phys. 88 057602

    The post A recipe for quantum chaos appeared first on Physics World.

    https://physicsworld.com/a/a-recipe-for-quantum-chaos/
    Paul Mabey

    Neural simulation-based inference techniques at the LHC

    Researchers from the ATLAS collaboration have introduced a new neural simulation-based inference technique to analyse their datasets

    The post Neural simulation-based inference techniques at the LHC appeared first on Physics World.

    Precision measurements of theoretical parameters are a core element of the scientific program of experiments at the Large Hadron Collider (LHC) as well as other particle colliders. 

    These are often performed using statistical techniques such as the method of maximum likelihood. However, given the size of datasets generated, reduction techniques, such as grouping data into bins, are often necessary. 

    These can lead to a loss of sensitivity, particularly in non-linear cases like off-shell Higgs boson production and effective field theory measurements. The non-linearity in these cases comes from quantum interference, and traditional methods are unable to optimally distinguish the signal from the background.

    In this paper, the ATLAS collaboration pioneered the use of a neural-network-based technique called neural simulation-based inference (NSBI) to combat these issues. 

    A neural network is a machine learning model originally inspired by how the human brain works. It’s made up of layers of interconnected units called neurons, which process information and learn patterns from data. Each neuron receives input, performs a simple calculation, and passes the result to other neurons. 

    NSBI uses these neural networks to analyse each particle collision event individually, preserving more information and improving accuracy.
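
    As a minimal illustration of the underlying idea – the generic “classifier as likelihood ratio” trick on toy one-dimensional data, written in Python, and not the ATLAS implementation or its real observables – a network trained to separate simulated signal-plus-background events from background-only events provides a per-event likelihood-ratio estimate that can be used without binning:

    # Toy sketch of per-event likelihood-ratio estimation with a small neural network.
    # All distributions, sizes and network settings below are illustrative assumptions.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(42)
    n = 20_000
    # Hypothetical 1D observable: background ~ N(0, 1), signal ~ N(1.5, 0.8)
    bkg = rng.normal(0.0, 1.0, size=n)
    sig_plus_bkg = np.concatenate([rng.normal(0.0, 1.0, size=int(0.8 * n)),
                                   rng.normal(1.5, 0.8, size=int(0.2 * n))])
    X = np.concatenate([sig_plus_bkg, bkg]).reshape(-1, 1)
    y = np.concatenate([np.ones_like(sig_plus_bkg), np.zeros_like(bkg)])

    clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
    clf.fit(X, y)

    # For a balanced training set, p/(1-p) estimates the per-event density ratio
    # p(x | signal+background) / p(x | background-only).
    x_test = np.array([[-1.0], [0.0], [1.5], [3.0]])
    p = clf.predict_proba(x_test)[:, 1]
    ratio = p / (1.0 - p)
    for xv, r in zip(x_test.ravel(), ratio):
        print(f"x = {xv:+.1f}  estimated likelihood ratio ~ {r:.2f}")

    Summing the logarithms of such per-event ratios over a dataset gives an unbinned test statistic, which is the kind of quantity that NSBI-style analyses build their parameter inference on.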

    The framework developed here can handle many sources of uncertainty and includes tools to measure how confident scientists can be in their results.

    The researchers benchmarked their method by using it to calculate the Higgs boson signal strength and compared it to previous methods, with impressive results.

    The greatly improved sensitivity gained from using this method will be invaluable in the search for physics beyond the Standard Model in future experiments at ATLAS and beyond.

    Read the full article

    An implementation of neural simulation-based inference for parameter estimation in ATLAS – IOPscience

    The ATLAS Collaboration, 2025 Rep. Prog. Phys. 88 067801

    The post Neural simulation-based inference techniques at the LHC appeared first on Physics World.

    https://physicsworld.com/a/neural-simulation-based-inference-techniques-at-the-lhc/
    Paul Mabey

    Chip-integrated nanoantenna efficiently harvests light from diamond defects

    Nearly all light emitted by nitrogen-vacancy centres can be collected, providing a boost for these room-temperature quantum technology platforms

    The post Chip-integrated nanoantenna efficiently harvests light from diamond defects appeared first on Physics World.

    When diamond defects emit light, how much of that light can be captured and used for quantum technology applications? According to researchers at the Hebrew University of Jerusalem, Israel and Humboldt Universität of Berlin, Germany, the answer is “nearly all of it”. Their technique, which relies on positioning a nanoscale diamond at an optimal location within a chip-integrated nanoantenna, could lead to improvements in quantum communication and quantum sensing.

    Guided light: Illustration showing photon emission from a nanodiamond and light directed by a bullseye antenna. (Courtesy: Boaz Lubotzky)

    Nitrogen-vacancy (NV) centres are point defects that occur when one carbon atom in diamond’s lattice structure is replaced by a nitrogen atom next to an empty lattice site (a vacancy). Together, this nitrogen atom and its adjacent vacancy behave like a negatively charged entity with an intrinsic quantum spin.

    When excited with laser light, an electron in an NV centre can be promoted into an excited state. As the electron decays back to the ground state, it emits light. The exact absorption-and-emission process is complicated by the fact that both the ground state and the excited state of the NV centre have three sublevels (spin triplet states). However, by exciting an individual NV centre repeatedly and collecting the photons it emits, it is possible to determine the spin state of the centre.

    The problem, explains Boaz Lubotzky, who co-led this research effort together with his colleague Ronen Rapaport, is that NV centres radiate over a wide range of angles. Hence, without an efficient collection interface, much of the light they emit is lost.
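
    To get a rough sense of scale (an idealized estimate, not a figure from the study), a point source radiating isotropically in a uniform medium of refractive index $n$ is collected by a lens of numerical aperture NA only within the solid-angle fraction

    \[
    \eta_{\mathrm{coll}} \;=\; \frac{1-\cos\theta_{\max}}{2}, \qquad \theta_{\max} = \arcsin\!\left(\frac{\mathrm{NA}}{n}\right),
    \]

    which for NA = 0.5 in air is only about 7%. Real NV centres emit with a dipole-like pattern and sit near interfaces, so this is only a crude estimate, but it illustrates why an interface that redirects the emission is so valuable.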

    Standard optics capture around 80% of the light

    Lubotzky and colleagues say they have now solved this problem with a hybrid nanostructure made from a PMMA dielectric layer above a silver grating. The grating is arranged in a precise bullseye pattern that guides light in a well-defined direction through constructive interference. Using a nanometre-accurate positioning technique, the researchers placed the nanodiamond containing the NV centres exactly at the optimal location for light collection: right at the centre of the bullseye.

    For standard optics with a numerical aperture (NA) of about 0.5, the team found that the system captures around 80% of the light emitted from the NV centres. When NA > 0.7, this value exceeds 90%, while for NA > 0.8, Lubotzky says it approaches unity.

    “The device provides a chip-based, room-temperature interface that makes NV emission far more directional, so a larger fraction of photons can be captured by standard lenses or coupled into fibres and photonic chips,” he tells Physics World. “Collecting more photons translates into faster measurements, higher sensitivity and lower power, thereby turning NV centres into compact precision sensors and also into brighter, easier-to-use single-photon sources for secure quantum communication.”

    The researchers say their next priority is to transition their prototype into a plug-and-play, room-temperature module – one that is fully packaged and directly coupled to fibres or photonic chips – with wafer-level deterministic placement for arrays. “In parallel, we will be leveraging the enhanced collection for NV-based magnetometry, aiming for faster, lower-power measurements with improved readout fidelity,” says Lubotzky. “This is important because it will allow us to avoid repeated averaging and enable fast, reliable operation in quantum sensors and processors.”

    They detail their present work in APL Quantum.

    The post Chip-integrated nanoantenna efficiently harvests light from diamond defects appeared first on Physics World.

    https://physicsworld.com/a/chip-integrated-nanoantenna-efficiently-harvests-light-from-diamond-defects/
    Isabelle Dumé

    Illuminating quantum worlds: a Diwali conversation with Rupamanjari Ghosh

    The 2025 Homi Bhabha lecturer Rupamanjari Ghosh is a leading expert in quantum optics and a transformative figure in higher education and science policy in India

    The post Illuminating quantum worlds: a Diwali conversation with Rupamanjari Ghosh appeared first on Physics World.

    Homes and cities around the world are this week celebrating Diwali or Deepavali – the Indian “festival of lights”. For Indian physicist Rupamanjari Ghosh, who is the former vice chancellor of Shiv Nadar University Delhi-NCR, this festival sheds light on the quantum world. Known for her work on nonlinear optics and entangled photons, Ghosh finds a deep resonance between the symbolism of Diwali and the ongoing revolution in quantum science.

    “Diwali comes from Deepavali, meaning a ‘row of lights’. It marks the triumph of light over dark; good over evil; and knowledge over ignorance,” Ghosh explains. “In science too, every discovery is a Diwali –  a victory of knowledge over ignorance.”

    With 2025 being marked by the International Year of Quantum Science and Technology, a victory of knowledge over ignorance couldn’t ring truer. “It has taken us a hundred years since the birth of quantum mechanics to arrive at this point, where quantum technologies are poised to transform our lives,” says Ghosh.

    Ghosh has another reason to celebrate, having been named as this year’s Institute of Physics (IOP) Homi Bhabha lecturer. The IOP and the Indian Physical Association (IPA) jointly host the Homi Bhabha and Cockcroft Walton bilateral exchange of lecturers. Running since 1998, these international programmes aim to promote dialogue on global challenges through physics and provide physicists with invaluable opportunities for global exposure and professional growth. Ghosh’s online lecture, entitled “Illuminating quantum frontiers: from photons to emerging technologies”, will be aired at 3 p.m. GMT on Wednesday 22 October.

    From quantum twins to quantum networks

    Ghosh’s career in physics took off in the mid-1980s, when she and American physicist Leonard Mandel – who is often referred to as one of the founding fathers of quantum optics – demonstrated a new quantum source of twin photons through spontaneous parametric down-conversion: a process where a high-energy photon splits into two lower-energy, correlated photons (Phys. Rev. Lett. 59, 1903).
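
    In this process the pump photon’s energy and momentum are shared between the two daughter photons – the phase-matching conditions, stated here in standard textbook form rather than taken from that paper:

    \[
    \hbar\omega_{\mathrm{p}} = \hbar\omega_{\mathrm{s}} + \hbar\omega_{\mathrm{i}}, \qquad \hbar\mathbf{k}_{\mathrm{p}} \approx \hbar\mathbf{k}_{\mathrm{s}} + \hbar\mathbf{k}_{\mathrm{i}},
    \]

    where the subscripts label the pump, signal and idler photons. It is these constraints that tie the pair together in energy and momentum.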

    “Before that,” she recalls, “no-one was looking for quantum effects in this nonlinear optical process. The correlations between the photons defied classical explanation. It was an elegant early verification of quantum nonlocality.”

    Those entangled photon pairs are now the building blocks of quantum communication and computation. “We’re living through another Diwali of light,” she says, “where theoretical understanding and experimental innovation illuminate each other.”

    Entangled light

    During Diwali, lamps unite households in a shimmering network of connection,  and so too does entanglement of photons. “Quantum entanglement reminds us that connection transcends locality,” Ghosh says. “In the same way, the lights of Diwali connect us across borders and cultures through shared histories.”

    Her own research extends that metaphor further. Ghosh’s team has worked on mapping quantum states of light onto collective atomic excitations. These “slow-light” techniques – using electromagnetically induced transparency or Raman interactions – allow photons to be stored and retrieved, forming the backbone of long-distance quantum communication (Phys. Rev. A 88 023852; EPL 105 44002).

    “Symbolically,” she adds, “it’s like passing the flame from one diya (lamp) to another. We’re not just spreading light –  we’re preserving, encoding and transmitting it. Success comes through connection and collaboration.”

    Beyond the shadows: Ghosh calls for the bright light of inclusivity in science. (Courtesy: Rupamanjari Ghosh)

    The dark side of light

    Ghosh is quick to note that in quantum physics, “darkness” is far from empty. “In quantum optics, even the vacuum is rich –  with fluctuations that are essential to our understanding of the universe.”

    Her group studies the transition from quantum to classical systems, using techniques such as error correction, shielding and coherence-preserving materials. “Decoherence –  the loss of quantum behaviour through environmental interaction –  is a constant threat. To build reliable quantum technologies, we must engineer around this fragility,” Ghosh explains.

    There are also human-engineered shadows: some weaknesses in quantum communication devices aren’t due to the science itself – they come from mistakes or flaws in how humans built them. Hackers can exploit these “side channels” to get around security. “Security,” she warns, “is only as strong as the weakest engineering link.”

    Beyond the lab, Ghosh finds poetic meaning in these challenges. “Decoherence isn’t just a technical problem –  it helps us understand the arrows of time, why the universe evolves irreversibly. The dark side has its own lessons.”

    Lighting every corner

    For Ghosh, Diwali’s illumination is also a call for inclusivity in science. “No corner should remain dark,” she says. “Science thrives on diversity. Diverse teams ask broader questions and imagine richer answers. It’s not just morally right – it’s good for science.”

    She argues that equity is not sameness but recognition of uniqueness. “Innovation doesn’t come from conformity. Gender diversity, for example, brings varied cognitive and collaborative styles – essential in a field like quantum science, where intuition is constantly stretched.”

    The shadows she worries most about are not in the lab, but in academia itself. “Unconscious biases in mentorship or gatekeeping in opportunity can accumulate to limit visibility. Institutions must name and dismantle these hidden shadows through structural and cultural change.”

    Her vision of inclusion extends beyond gender. “We shouldn’t think of work and life as opposing realms to ‘balance’,” she says. “It’s about creating harmony among all dimensions of life – work, family, learning, rejuvenation. That’s where true brilliance comes from.”

    As the rows of diyas are lit this Diwali, Ghosh’s reflections remind us that light – whether classical or quantum – is both a physical and moral force: it connects, illuminates and endures. “Each advance in quantum science,” she concludes, “is another step in the age-old journey from darkness to light.”

    This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

    Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

    Find out more on our quantum channel.

    The post Illuminating quantum worlds: a Diwali conversation with Rupamanjari Ghosh appeared first on Physics World.

    https://physicsworld.com/a/illuminating-quantum-worlds-a-diwali-conversation-with-rupamanjari-ghosh/
    No Author

    Influential theoretical physicist and Nobel laureate Chen-Ning Yang dies aged 103

    Yang is famed for his work on parity violation

    The post Influential theoretical physicist and Nobel laureate Chen-Ning Yang dies aged 103 appeared first on Physics World.

    The Chinese particle physicist Chen-Ning Yang died on 18 October at the age of 103. Yang shared half of the 1957 Nobel Prize for Physics with Tsung-Dao Lee for their theoretical work that overturned the notion that parity is conserved in the weak force – one of the four fundamental forces of nature.

    Born on 22 September 1922 in Hefei, China, Yang completed a BSc at the National Southwest Associated University in Kunming in 1942. After finishing an MSc in statistical physics at Tsinghua University two years later, he moved in 1945 to the University of Chicago in the US as part of a government-sponsored programme. He received his PhD in physics in 1948, working under the guidance of Edward Teller.

    In 1949 Yang moved to the Institute for Advanced Study in Princeton, where he made pioneering contributions to quantum field theory, working together with Robert Mills. In 1953 they proposed the Yang-Mills theory, which became a cornerstone of the Standard Model of particle physics.

    The ‘Wu experiment’

    It was also at Princeton where Yang began a fruitful collaboration with Lee, who died last year aged 97. Their work on parity – a property of elementary particles that expresses their behaviour upon reflection in a mirror – led to the duo winning the Nobel prize.

    In the early 1950s, physicists had been puzzled by the decays of two subatomic particles, known as tau and theta, which appeared to be identical except that the tau decays into three pions with a net parity of -1, while the theta decays into two pions with a net parity of +1.

    There were two possible explanations: either the tau and theta were different particles, or parity is not conserved in the weak interaction. Yang and Lee proposed various ways to test the latter idea (Phys. Rev. 104 254).

    This “parity violation” was later proved experimentally by, among others, Chien-Shiung Wu at Columbia University. She carried out an experiment based on the radioactive decay of unstable cobalt-60 nuclei into nickel-60 – what became known as the “Wu experiment”. For their work, Yang, who was 35 at the time, shared the 1957 Nobel Prize for Physics with Lee.

    Influential physicist

    In 1965 Yang moved to Stony Brook University, becoming the first director of the newly founded Institute for Theoretical Physics, which is now known as the C N Yang Institute for Theoretical Physics. During this time he also contributed to advancing science and education in China, setting up the Committee on Educational Exchange with China – a programme that has sponsored some 100 Chinese scholars to study in the US.

    In 1997, Yang returned to Beijing where he became an honorary director of the Centre for Advanced Study at Tsinghua University. He then retired from Stony Brook in 1999, becoming a professor at Tsinghua University. During his time in the US, Yang obtained US citizenship, but renounced it in 2015.

    More recently, Yang was involved in debates over whether China should build the Circular Electron Positron Collider (CEPC) – a huge underground collider, 100 km in circumference, that would study the Higgs boson in unprecedented detail and be a successor to CERN’s Large Hadron Collider. Yang took a sceptical view, calling it “inappropriate” for a developing country that is still struggling with “more acute issues like economic development and environment protection”.

    Yang also expressed concern that the science performed on the CEPC would be just “guess” work, with no guaranteed results. “I am not against the future of high-energy physics, but the timing is really bad for China to build such a super collider,” he noted in 2016. “Even if they see something with the machine, it’s not going to benefit the life of Chinese people any sooner.”

    Lasting legacy

    As well as the Nobel prize, Yang won many other awards such as the US National Medal of Science in 1986, the Einstein Medal in 1995, which is presented by the Albert Einstein Society in Bern, and the American Physical Society’s Lars Onsager Prize in 1990.

    “The world has lost one of the most influential physicists of the modern era,” noted Stony Brook president Andrea Goldsmith in a statement. “His legacy will continue through his transformational impact on the field of physics and through the many colleagues and students influenced by his teaching, scholarship and mentorship.”

    The post Influential theoretical physicist and Nobel laureate Chen-Ning Yang dies aged 103 appeared first on Physics World.

    https://physicsworld.com/a/influential-theoretical-physicist-and-nobel-laureate-chen-ning-yang-dies-aged-103/
    Michael Banks

    ‘Science needs all perspectives – male, female and everything in-between’: Brazilian astronomer Thaisa Storchi Bergmann

    Meghie Rodrigues talks to Brazilian astronomer Thaisa Storchi Bergmann about curiosity, black holes and current challenges in research funding

    The post ‘Science needs all perspectives – male, female and everything in-between’: Brazilian astronomer Thaisa Storchi Bergmann appeared first on Physics World.

    As a teenager in her native Rio Grande do Sul, a state in Southern Brazil, Thaisa Storchi Bergmann enjoyed experimenting in an improvised laboratory her parents built in their attic. They didn’t come from a science background – her father was an accountant, her mother a primary school teacher – but they encouraged her to do what she enjoyed. With a friend from school, Storchi Bergmann spent hours looking at insects with a microscope and running experiments from a chemistry toy kit. “We christened the lab Thasi-Cruz after a combination of our names,” she chuckles.

    At the time, Storchi Bergmann could not have imagined that one day this path would lead to cosmic discoveries and international recognition at the frontiers of astrophysics. “I always had the curiosity inside me,” she recalls. “It was something I carried since adolescence.”

    That curiosity almost got lost to another discipline. By the time Storchi Bergmann was about to enter university, she was swayed by a cousin living with her family who was passionate about architecture. In 1974 she began studying architecture at the Federal University of Rio Grande do Sul (UFRGS). “But I didn’t really like technical drawing. My favourite part of the course was the physics classes,” she says. Within a semester, she switched to physics.

    There she met Edemundo da Rocha Vieira, the first astrophysicist UFRGS ever hired – who later went on to structure the university’s astronomy department. He nurtured Storchi Bergmann’s growing fascination with the universe and introduced her to research.

    In 1977, newly married after graduation, Storchi Bergmann followed her husband to Rio de Janeiro, where she did a master’s degree and worked with William Kunkel, an American astronomer who was in Rio to help establish Brazil’s National Astrophysics Laboratory. She began working on data from a photometric system to measure star radiation. “But Kunkel said galaxies were a lot more interesting to study, and that stuck in my head,” she says.

    Three years after moving to Rio, she returned to Porto Alegre, in Rio Grande do Sul, to start her doctoral research and teach at UFRGS. Vital to her career was her decision to join the group of Miriani Pastoriza, one of the pioneers of extragalactic astrophysics in Latin America. “She came from Argentina, where [in the late 1970s and early 1980s] scientists were being strongly persecuted [by the country’s military dictatorship] at the time,” she recalls. Pastoriza studied galaxies with “peculiar nuclei” – objects later known to harbour supermassive black holes. Under Pastoriza’s guidance, she moved from stars to galaxies, laying the foundation for her career.

    Between 1986 and 1987, Storchi Bergmann often travelled to Chile to make observations and gather data for her PhD, using some of the largest telescopes available at the time. Then came a transformative period – a postdoc fellowship in Maryland, US, just as the Hubble Space Telescope was launched in 1990. “Each Thursday, I would drive to Baltimore for informal bag-lunch talks at the Space Telescope Science Institute, absorbing new results on active galactic nuclei (AGN) and supermassive black holes,” Storchi Bergmann recalls.

    Discoveries and insights

    In 1991, during an observing campaign, she and a collaborator saw something extraordinary in the galaxy NGC 1097: gas moving at immense speeds, captured by the galaxy’s central black hole. The work, published in 1993, became one of the earliest documented cases of what are now called “tidal disruption events”, in which a star or cloud gets too close to a black hole and is torn apart.

    Her research also contributed to one of the defining insights of the Hubble era: that every massive galaxy hosts a central black hole. “At first, we didn’t know if they were rare,” she explains. “But gradually it became clear: these objects are fundamental to galaxy evolution.”

    Another collaboration brought her into contact with Daniela Calzetti, whose work on the effects of interstellar dust led to the formulation of the widely used “Calzetti law”. These and other contributions placed Storchi Bergmann among the most cited scientists worldwide, recognition of which came in 2015 when she received the L’Oréal-UNESCO Award for Women in Science.

    Her scientific achievements, however, unfolded against personal and structural obstacles. As a young mother, she often brought her baby to observatories and conferences so she could breastfeed. Many women in science are no strangers to this kind of juggling.

    “It was never easy,” Storchi Bergmann reflects. “I was always running, trying to do 20 things at once.” The lack of childcare infrastructure in universities compounded the challenge. She recalls colleagues who succeeded by giving up on family life altogether. “That is not sustainable,” she insists. “Science needs all perspectives – male, female and everything in-between. Otherwise, we lose richness in our vision of the universe.”

    When she attended conferences early in her career, she was often the only woman in the room. Today, she says, the situation has greatly improved, even if true equality remains distant.

    Now a tenured professor at UFRGS and a member of the Brazilian Academy of Sciences, Storchi Bergmann continues to push at the cosmic frontier. Her current focus is the Legacy Survey of Space and Time (LSST), about to begin at the Vera Rubin Observatory in Chile.

    Her group is part of the AGN science collaboration, developing methods to analyse the characteristic flickering of accreting black holes. With students, she is experimenting with automated pipelines and artificial intelligence to make sense of and manage the massive amounts of data.

    Challenges ahead

    Yet this frontier science is not guaranteed. Storchi Bergmann is frustrated by the recent collapse in research scholarships. Historically, her postgraduate programme enjoyed a strong balance of grants from both of Brazil’s federal research funding agencies, CNPq (from the Ministry of Science) and CAPES (from the Ministry of Education). But cuts at CNPq, she says, have left students without support, and CAPES has not filled the gap.

    “The result is heartbreaking,” she says. “I have brilliant students ready to start, including one from Piauí (a state in north-eastern Brazil), but without a grant, they simply cannot continue. Others are forced to work elsewhere to support themselves, leaving no time for research.”

    She is especially critical of the policy of redistributing scarce funds away from top-rated programmes to newer ones without expanding the overall budget. “You cannot build excellence by dismantling what already exists,” she argues.

    For her, the consequences go beyond personal frustration. They risk undermining decades of investment that placed Brazil on the international astrophysics map. Despite these challenges, Storchi Bergmann remains driven and continues to mentor master’s and PhD students, determined to prepare them for the LSST era.

    At the heart of her research is a question as grand as any in cosmology: which came first – the galaxy or its central black hole? The answer, she believes, will reshape our understanding of how the universe came to be. And it will carry with it the fingerprint of her work: the persistence of a Brazilian scientist who followed her curiosity from a home-made lab to the centres of galaxies, overcoming obstacles along the way.

    The post ‘Science needs all perspectives – male, female and everything in-between’: Brazilian astronomer Thaisa Storchi Bergmann appeared first on Physics World.

    https://physicsworld.com/a/science-needs-all-perspectives-male-female-and-everything-in-between-brazilian-astronomer-thaisa-storchi-bergmann/
    No Author

    Precision sensing experiment manipulates Heisenberg’s uncertainty principle

    Though the principle is not violated, a new way of fudging its restrictions could lead to improvements in ultra-precise sensing, say physicists

    The post Precision sensing experiment manipulates Heisenberg’s uncertainty principle appeared first on Physics World.

    Physicists in Australia and the UK have found a new way to manipulate Heisenberg’s uncertainty principle in experiments on the vibrational mode of a trapped ion. Although still at the laboratory stage, the work, which uses tools developed for error correction in quantum computing, could lead to improvements in ultra-precise sensor technologies like those used in navigation, medicine and even astronomy.

    “Heisenberg’s principle says that if two operators – for example, position x and momentum, p – do not commute, then one cannot simultaneously measure both of them to absolute precision,” explains team leader Ting Rei Tan of the University of Sydney’s Nano Institute. “Our result shows that one can instead construct new operators – namely ‘modular position’ x̂ and ‘modular momentum’ p̂. These operators can be made to commute, meaning that we can circumvent the usual limitation imposed by the uncertainty principle.”

    The modular measurements, he says, return the true displacements of the particle in position and momentum, provided those displacements are smaller than a specific length l, known as the modular length. In the new work, the team measured x̂ = x mod lx and p̂ = p mod lp, where lx and lp are the modular lengths in position and momentum respectively.

    “Since the two modular operators x̂ and p̂ commute, this means that they are now bounded by an uncertainty principle where the product is larger or equal to 0 (instead of the usual ℏ/2),” adds team member Christophe Valahu. “This is how we can use them to sense position and momentum below the standard quantum limit. The catch, however, is that this scheme only works if the signal being measured is within the sensing range defined by the modular lengths.”

    The researchers stress that Heisenberg’s uncertainty principle is in no way “broken” by this approach, but it does mean that when observables associated with these new operators are measured, the precision of these measurements is not limited by this principle. “What we did was to simply push the uncertainty to a sensing range that is relatively unimportant for our measurement to obtain a better precision at finer details,” Valahu tells Physics World.

    This concept, Tan explains, is related to an older method known as quantum squeezing that also works by shifting uncertainties around. The difference is that in squeezing, one reshapes the probability, reducing the spread in position at the cost of enlarging the spread of momentum, or vice versa. “In our scheme, we instead redistribute the probability, reducing the uncertainties of position and momentum within a defined sensing range, at the cost of an increased uncertainty if the signal is not guaranteed to lie within this range,” Tan explains. “We effectively push the unavoidable quantum uncertainty to places we don’t care about (that is, big, coarse jumps in position and momentum) so the fine details we do care about can be measured more precisely.

    “Thus, as long as we know the signal is small (which is almost always the case for precision measurements), modular measurements give us the correct answer.”
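
    As a rough numerical illustration of that point, consider the minimal Python sketch below (my own toy example, not the team’s analysis; the modular length and displacement values are arbitrary). A modular position readout folds the true displacement into a window of width lx: displacements inside the window are returned exactly, while one that exceeds the modular length is aliased back into it.

        # Toy sketch (illustrative only, not the authors' code): a modular position
        # readout returns x mod l_x, folded into a window of width l_x centred on zero.
        import numpy as np

        l_x = 1.0                                      # modular length (arbitrary units)
        displacements = np.array([0.05, 0.30, 1.40])   # the last one exceeds l_x

        def modular_readout(x, l):
            """Fold x into the interval [-l/2, l/2)."""
            return (x + l / 2) % l - l / 2

        for d in displacements:
            print(f"true = {d:+.2f}, modular readout = {modular_readout(d, l_x):+.2f}")
        # 0.05 and 0.30 come back exactly; 1.40 reads as +0.40 -- the "catch" Valahu
        # describes: the signal must lie within the sensing range.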

    Repurposed ideas and techniques

    The particle being measured in Tan and colleagues’ experiment was a ¹⁷¹Yb⁺ ion trapped in a so-called grid state, which is a subclass of error-correctable logical states for quantum bits, or qubits. The researchers then used a quantum phase estimation protocol to measure the signal they imprinted onto this state, which acts as a sensor.

    This measurement scheme is similar to one that is commonly used to measure small errors in the logical qubit state of a quantum computer. “The difference is that in this case, the ‘error’ corresponds to a signal that we want to estimate, which displaces the ion in position and momentum,” says Tan. “This idea was first proposed in a theoretical study.”

    Towards ultra-precise quantum sensors

    The Sydney researchers hope their result will motivate the development of next-generation precision quantum sensors. Being able to detect extremely small changes is important for many applications of quantum sensing, including navigating environments where GPS isn’t effective (such as on submarines, underground or in space). It could also be useful for biological and medical imaging, materials analysis and gravitational systems.

    Their immediate goal, however, is to further improve the sensitivity of their sensor, which is currently about 14 × 10⁻²⁴ N/√Hz, and calculate its limit. “It would be interesting if we could push that to the 10⁻²⁷ N level (which, admittedly, will not be easy) since this level of sensitivity could be relevant in areas like the search for dark matter,” Tan says.
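
    To see why the sensitivity itself has to improve rather than simply averaging for longer, here is a back-of-envelope estimate (my own, assuming the quoted figure behaves as a white-noise-limited sensitivity, so that the smallest resolvable force falls as one over the square root of the averaging time):

        # Back-of-envelope sketch (not from the paper): for a white-noise-limited
        # sensitivity S, the smallest force resolvable after averaging for a time T
        # is roughly S / sqrt(T).
        S = 14e-24          # N per root hertz, the sensitivity quoted above
        F_target = 1e-27    # N, the level Tan mentions for dark-matter searches
        T = (S / F_target) ** 2            # averaging time needed, in seconds
        print(T / 3.15e7)                  # ~6 years of averaging -- hence the push
                                           # to improve the sensitivity itself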

    Another direction for future research, he adds, is to extend the scheme to other pairs of observables. “Indeed, we have already taken some steps towards this: in the latter part of our present study, which is published in Science Advances, we constructed a modular number operator and a modular phase operator to demonstrate that the strategy can be extended beyond position and momentum.”

    The post Precision sensing experiment manipulates Heisenberg’s uncertainty principle appeared first on Physics World.

    https://physicsworld.com/a/precision-sensing-experiment-manipulates-heisenbergs-uncertainty-principle/
    Isabelle Dumé

    Eye implant restores vision to patients with incurable sight loss

    A tiny wireless implant allows people with sight loss due to age-related macular degeneration to read again

    The post Eye implant restores vision to patients with incurable sight loss appeared first on Physics World.

    A tiny wireless implant inserted under the retina can restore central vision to patients with sight loss due to age-related macular degeneration (AMD). In an international clinical trial, the PRIMA (photovoltaic retina implant microarray) system restored the ability to read in 27 of 32 participants followed up after a year.

    AMD is the most common cause of incurable blindness in older adults. In its advanced stage, known as geographic atrophy, AMD can cause progressive, irreversible death of light-sensitive photoreceptors in the centre of the retina. This loss of photoreceptors means that light is not transduced into electrical signals, causing profound vision loss.

    The PRIMA system works by replacing these lost photoreceptors. The two-part system includes the implant itself: a 2 × 2 mm array of 378 photovoltaic pixels, plus PRIMA glasses containing a video camera that captures images and, after processing, projects them onto the implant using near-infrared light. The pixels in the implant convert this light into electrical pulses, restoring the flow of visual information to the brain. Patients can use the glasses to focus and zoom the image that they see.

    The clinical study, led by Frank Holz of the University of Bonn in Germany, enrolled 38 participants at 17 hospital sites in five European countries. All participants had geographic atrophy due to AMD in both eyes, as well as loss of central sight in the study eye over a region larger than the implant (more than 2.4 mm in diameter), leaving only limited peripheral vision.

    Around one month after surgical insertion of the 30 μm-thick PRIMA array into one eye, the patients began using the glasses. All underwent training to learn to interpret the visual signals from the implant, with their vision improving over months of training.

    The PRIMA implant: representative fundus and OCT images obtained before and after implantation of the array in a patient’s eye. (Courtesy: Science Corporation)

    After one year, 27 of the 32 patients who completed the trial could read letters and words (with some able to read pages in a book) and 26 demonstrated clinically meaningful improvement in visual acuity (the ability to read at least two extra lines on a standard eye chart). On average, participants could read an extra five lines, with one person able to read an additional 12 lines.
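
    For context, on a standard logMAR-style letter chart each line corresponds to a change of 0.1 in logMAR, so gaining N lines means resolving letters roughly 10^(0.1N) times smaller. The short sketch below is a back-of-envelope conversion under that assumption, not a figure reported by the trial:

        # Rough conversion (assuming a standard logMAR chart, where each line
        # corresponds to 0.1 logMAR): gaining N lines means the smallest resolvable
        # letters are about 10**(0.1 * N) times smaller.
        for lines_gained in (2, 5, 12):
            factor = 10 ** (0.1 * lines_gained)
            print(f"{lines_gained} lines gained -> ~{factor:.1f}x finer detail resolved")
        # 2 lines ~ 1.6x, 5 lines ~ 3.2x, 12 lines ~ 16x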

    Nineteen of the participants experienced side-effects from the surgical procedure, with 95% of adverse events resolving within two months. Importantly, their peripheral vision was not impacted by PRIMA implantation. The researchers note that the infrared light used by the implant is not visible to remaining photoreceptors outside the affected region, allowing patients to combine their natural peripheral vision with the prosthetic central vision.

    “Before receiving the implant, it was like having two black discs in my eyes, with the outside distorted,” Sheila Irvine, a trial patient treated at Moorfields Eye Hospital in the UK, says in a press statement. “I was an avid bookworm, and I wanted that back. There was no pain during the operation, but you’re still aware of what’s happening. It’s a new way of looking through your eyes, and it was dead exciting when I began seeing a letter. It’s not simple, learning to read again, but the more hours I put in, the more I pick up. It’s made a big difference.”

    The PRIMA system – originally designed by Daniel Palanker at Stanford University – is being developed and manufactured by Science Corporation. Based on these latest results, reported in the New England Journal of Medicine, the company has applied for clinical use authorization in Europe and the United States.

    The post Eye implant restores vision to patients with incurable sight loss appeared first on Physics World.

    https://physicsworld.com/a/eye-implant-restores-vision-to-patients-with-incurable-sight-loss/
    Tami Freeman

    Single-phonon coupler brings different quantum technologies together

    Waveguide-style approach promises to pair the versatility of phononic quantum technologies with the tight control of photonic ones

    The post Single-phonon coupler brings different quantum technologies together appeared first on Physics World.

    Researchers in the Netherlands have demonstrated the first chip-based device capable of splitting phonons, which are quanta of mechanical vibrations. Known as a single-phonon directional coupler, or more simply as a phonon splitter, the new device could make it easier for different types of quantum technologies to “talk” to each other. For example, it could be used to transfer quantum information from spins, which offer advantages for data storage, to superconducting circuits, which may be better for data processing.

    “One of the main advantages of phonons over photons is they interact with a lot of different things,” explains team leader Simon Gröblacher of the Kavli Institute of Nanoscience at Delft University of Technology. “So it’s very easy to make them interface with systems.”

    There are, however, a few elements still missing from the phononic circuitry developer’s toolkit. One such element is a reversible beam splitter that can either combine two phonon channels (which might be carrying quantum information transferred from different media) or split one channel into two, depending on its orientation.

    While several research groups have already investigated designs for such phonon splitters, these works largely focused on surface acoustic waves. This approach has some advantages, as waves of this type have already been widely explored and exploited commercially. Mobile phones, for example, use surface acoustic waves as filters for microwave signals. The problem is that these unconfined mechanical excitations are prone to substantial losses as phonons leak into the rest of the chip.

    Mimicking photonic beam splitters

    Gröblacher and his collaborators chose instead to mimic the design of beam splitters used in photonic chips. They used a strip of thin silicon to fashion a waveguide for phonons that confined them in all dimensions but one, giving additional control and reducing loss. They then brought two waveguides into contact with each other so that one waveguide could “feel” the mechanical excitations in the other. This allowed phonon modes to be coupled between the waveguides – something the team demonstrated down to the single-phonon level. The researchers also showed they could tune the coupling between the two waveguides by altering the contact length.
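
    Coupled waveguides of this kind are commonly described by coupled-mode theory, in which energy oscillates back and forth between the two guides as it propagates. The sketch below is a generic illustration of that picture rather than the Delft team’s model (the coupling rate is an arbitrary number): the transferred fraction goes as sin²(κL), which is why altering the contact length L tunes the splitting ratio.

        # Generic coupled-mode-theory sketch (not the Delft device's parameters): for
        # two identical, lossless waveguides with coupling rate kappa, the fraction of
        # power transferred after a contact length L is sin^2(kappa * L).
        import numpy as np

        kappa = np.pi / 4            # coupling rate per unit length (arbitrary units)
        for L in (0.5, 1.0, 2.0):    # contact lengths (arbitrary units)
            transferred = np.sin(kappa * L) ** 2
            print(f"L = {L}: {transferred:.2f} transferred, {1 - transferred:.2f} remains")
        # L = 1.0 gives a 50:50 split and L = 2.0 gives full transfer, so changing the
        # contact length sets the splitting ratio.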

    Although this is the first demonstration of single-mode phonon coupling in this kind of waveguide, the finite element method simulations Gröblacher and his colleagues ran beforehand made him pretty confident it would work from the outset. “I’m not surprised that it worked. I’m always surprised how hard it is to get it to work,” he tells Physics World. “Making it to look and do exactly what you design it to do – that’s the really hard part.”

    Prospects for integrated quantum phononics

    According to A T Charlie Johnson, a physicist at the University of Pennsylvania, US, whose research focuses on this area, that hard work paid off. “These very exciting new results further advance the prospects for phonon-based qubits in quantum technology,” says Johnson, who was not directly involved in the demonstration. “Integrated quantum phononics is one significant step closer.”

    As well as switching between different quantum media, the new single-phonon coupler could also be useful for frequency shifting. For instance, microwave frequencies are close to the frequencies of ambient heat, which makes signals at these frequencies much more prone to thermal noise. Gröblacher already has a company working on transducers to transform quantum information from microwave to optical frequencies with this challenge in mind, and he says a single-phonon coupler could be handy.

    One remaining challenge to overcome is dispersion, which occurs when phonon modes couple to other unwanted modes. This is usually due to imperfections in the nanofabricated device, which are hard to avoid. However, Gröblacher also has other aspirations. “I think the one component that’s missing for us to have the similar level of control over phonons as people have with photons is a phonon phase shifter,” he tells Physics World. This, he says, would allow on-chip interferometry to route phonons to different parts of a chip, and perform advanced quantum experiments with phonons.

    The study is reported in Optica.

    The post Single-phonon coupler brings different quantum technologies together appeared first on Physics World.

    https://physicsworld.com/a/single-phonon-coupler-brings-different-quantum-technologies-together/
    Anna Demming

    This jumping roundworm uses static electricity to attach to flying insects

    The parasitic roundworm Steinernema carpocapsae can leap some 25 times its body length into the air

    The post This jumping roundworm uses static electricity to attach to flying insects appeared first on Physics World.

    Researchers in the US have discovered that a tiny jumping worm uses static electricity to increase the chances of attaching to its unsuspecting prey.

    The parasitic roundworm Steinernema carpocapsae, which lives in soil, is already known to leap some 25 times its body length into the air. It does this by curling into a loop and springing into the air, rotating hundreds of times a second.

    If the nematode lands successfully, it releases bacteria that kill the insect within a couple of days, after which the worm feasts on the carcass and lays its eggs. If it fails to attach to a host, however, the worm itself faces death.

    While static electricity plays a role in how some non-parasitic nematodes detach from large insects, little is known about whether static helps their parasitic counterparts attach to an insect in the first place.

    To investigate, researchers at Emory University and the University of California, Berkeley, conducted a series of experiments in which they used high-speed microscopy techniques to film the worms as they leapt onto a fruit fly.

    They did this by tethering a fly with a copper wire that was connected to a high-voltage power supply.

    They found that a potential of a few hundred volts on the fly – similar to that generated in the wild by an insect’s wings rubbing against ions in the air – induces a negative charge on the worm, creating an attractive force with the positively charged fly.

    Carrying out simulations of the worm jumps, they found that without any electrostatics, only 1 in 19 worm trajectories successfully reached their target. The greater the voltage, however, the greater the chance of landing. For 880 V, for example, the probability was 80%.

    The team also carried out experiments using a wind tunnel, finding that the presence of wind helped the nematodes drift, which also increased their chances of attaching to the insect.

    “Using physics, we learned something new and interesting about an adaptive strategy in an organism,” notes Emory physicist Ranjiangshang Ran. “We’re helping to pioneer the emerging field of electrostatic ecology.”

    The post This jumping roundworm uses static electricity to attach to flying insects appeared first on Physics World.

    https://physicsworld.com/a/this-jumping-roundworm-uses-static-electricity-to-attach-to-flying-insects/
    Michael Banks

    Wearable UVA sensor warns about overexposure to sunlight

    Device is flexible and transparent to visible light

    The post Wearable UVA sensor warns about overexposure to sunlight appeared first on Physics World.

    Transparent healthcare: illustration of the fully transparent sensor that reacts to sunlight and allows real-time monitoring of UVA exposure on the skin. The device could be integrated into wearable items, such as glasses or patches. (Courtesy: Jnnovation Studio)

    A flexible and wearable sensor that allows the user to monitor their exposure to ultraviolet (UV) radiation has been unveiled by researchers in South Korea. Based on a heterostructure of four different oxide semiconductors, the sensor’s flexible, transparent design could vastly improve the real-time monitoring of skin health.

    UV light in the A band has wavelengths of 315–400 nm and comprises about 95% of UV radiation that reaches the surface of the earth. Because of its relatively long wavelength, UVA can penetrate deep into the skin. There it can alter biological molecules, damaging tissue and even causing cancer.

    While covering up with clothing and using sunscreen are effective at reducing UVA exposure, researchers are keen on developing wearable sensors that can monitor UVA levels in real time. These can alert users when their UVA exposure reaches a certain level. So far, the most promising advances towards these designs have come from oxide semiconductors.

    Many challenges

    “For the past two decades, these materials have been widely explored for displays and thin-film transistors because of their high mobility and optical transparency,” explains Seong Jun Kang at Kyung Hee University, who led the research. “However, their application to transparent ultraviolet photodetectors has been limited by high persistent photocurrent, poor UV–visible discrimination, and instability under sunlight.”

    While these problems can be avoided in more traditional UV sensors based on materials such as gallium nitride and zinc oxide, those materials are opaque and rigid – making them unsuitable for use in wearable sensors.

    In their study, Kang’s team addressed these challenges by introducing a multi-junction heterostructure, made by stacking multiple ultrathin layers of different oxide semiconductors. The four semiconductors they selected each had wide bandgaps, which made them more transparent in the visible spectrum but responsive to UV light.

    The structure included zinc and tin oxide layers as n-type semiconductors (doped with electron-donating atoms) and cobalt and hafnium oxide layers as p-type semiconductors (doped with electron-accepting atoms) – creating positively charged holes. Within the heterostructure, this selection created three types of interface: p–n junctions between hafnium and tin oxide; n–n junctions between tin and zinc oxide; and p–p junctions between cobalt and hafnium oxide.

    Efficient transport

    When the team illuminated their heterostructure with UVA photons, the electron–hole charge separation was enhanced by the p–n junction, while the n–n and p–p junctions allowed for more efficient transport of electrons and holes respectively, improving the design’s response speed. When the illumination was removed, the electron–hole pairs could quickly decay, avoiding any false detections.

    To test their design’s performance, the researchers integrated their heterostructure into a wearable detector. “In collaboration with UVision Lab, we developed an integrated Bluetooth circuit and smartphone application, enabling real-time display of UVA intensity and warning alerts when an individual’s exposure reaches the skin-type-specific minimal erythema dose (MED),” Kang describes. “When connected to the Bluetooth circuit and smartphone application, it successfully tracked real-time UVA variations and issued alerts corresponding to MED limits for various skin types.”
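
    The alert logic itself is simple to sketch. The snippet below is a hypothetical illustration of the scheme described above, not the UVision Lab application: it integrates a stream of UVA irradiance readings into a cumulative dose and raises a warning once a skin-type-specific threshold is crossed (the threshold and the readings are made-up numbers).

        # Hypothetical MED-style alert loop (all numbers invented for illustration):
        # integrate UVA irradiance over time and warn once the cumulative dose passes
        # a skin-type-specific threshold.
        readings_W_per_m2 = [20.0, 35.0, 50.0, 45.0, 30.0]   # one reading per minute
        interval_s = 60.0
        threshold_J_per_m2 = 10_000.0    # placeholder "MED" for one skin type

        dose = 0.0
        for minute, irradiance in enumerate(readings_W_per_m2, start=1):
            dose += irradiance * interval_s        # J/m^2 accumulated this interval
            if dose >= threshold_J_per_m2:
                print(f"minute {minute}: dose {dose:.0f} J/m^2 -- exposure alert")
                break
        else:
            print(f"total dose so far: {dose:.0f} J/m^2 (below threshold)")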

    As well as maintaining over 80% transparency, the sensor proved highly stable and responsive, even in direct outdoor sunlight and across repeated exposure cycles. Based on this performance, the team is now confident that their design could push the capabilities of oxide semiconductors beyond their typical use in displays and into the fast-growing field of smart personal health monitoring.

    “The proposed architecture establishes a design principle for high-performance transparent optoelectronics, and the integrated UVA-alert system paves the way for next-generation wearable and Internet-of-things-based environmental sensors,” Kang predicts.

    The research is described in Science Advances.

    The post Wearable UVA sensor warns about overexposure to sunlight appeared first on Physics World.

    https://physicsworld.com/a/wearable-uv-sensor-warns-about-overexposure-to-sunlight/
    No Author

    Astronauts could soon benefit from dissolvable eye insert

    A solution to microgravity-related vision problems is the topic of this week's podcast

    The post Astronauts could soon benefit from dissolvable eye insert appeared first on Physics World.

    Spending time in space has a big impact on the human body and can cause a range of health issues. Many astronauts develop vision problems because microgravity causes body fluids to redistribute towards the head. This can lead to swelling in the eye and compression of the optic nerve.

    While eye conditions can generally be treated with medication, delivering drugs in space is not a straightforward task. Eye drops simply don’t work without gravity, for example. To address this problem, researchers in Hungary are developing a tiny dissolvable eye insert that could deliver medication directly to the eye. The size of a grain of rice, the insert has now been tested by an astronaut on the International Space Station.

    This episode of the Physics World Weekly podcast features two of those researchers – Diána Balogh-Weiser of Budapest University of Technology and Economics and Zoltán Nagy of Semmelweis University – who talk about their work with Physics World’s Tami Freeman.

    The post Astronauts could soon benefit from dissolvable eye insert appeared first on Physics World.

    https://physicsworld.com/a/astronauts-could-soon-benefit-from-dissolvable-eye-insert/
    Tami Freeman

    Scientists obtain detailed maps of earthquake-triggering high-pressure subsurface fluids

    Advanced seismic imaging techniques could improve earthquake early warning models and aid the development of next-generation geothermal power

    The post Scientists obtain detailed maps of earthquake-triggering high-pressure subsurface fluids appeared first on Physics World.

    Researchers in Japan and Taiwan have captured three-dimensional images of an entire geothermal system deep in the Earth’s crust for the first time. By mapping the underground distribution of phenomena such as fracture zones and phase transitions associated with seismic activity, they say their work could lead to improvements in earthquake early warning models. It could also help researchers develop next-generation versions of geothermal power – a technology that study leader Takeshi Tsuji of the University of Tokyo says has enormous potential for clean, large-scale energy production.

    “With a clear three-dimensional image of where supercritical fluids are located and how they move, we can identify promising drilling targets and design safer and more efficient development plans,” Tsuji says. “This could have direct implications for expanding geothermal power generation, reducing dependence on fossil fuels, and contributing to carbon neutrality and energy security in Japan and globally.”

    In their study, Tsuji and colleagues focused on a region known as the brittle–ductile transition zone, which is where rocks go from being seismically active to mostly inactive. This zone is important for understanding volcanic activity and geothermal processes because it lies near an impermeable sealing band that allows fluids such as water to accumulate in a high-pressure, supercritical state. When these fluids undergo phase transitions, earthquakes may follow. Such fluids could also produce more geothermal energy than conventional systems, giving a second reason to pin down their location.

    A high-resolution “digital map”

    Many previous electromagnetic and magnetotelluric surveys suffered from low spatial resolution and were limited to regions relatively close to the Earth’s surface. In contrast, the techniques used in the latest study enabled Tsuji and colleagues to create a clear high-resolution “digital map” of deep geothermal reservoirs – something that has never been achieved before.

    To make their map, the researchers used three-dimensional multichannel seismic surveys to image geothermal structures in the Kuju volcanic group, which is located on the Japanese island of Kyushu. They then analysed these images using a method they developed known as extended Common Reflection Surface (CRS) stacking. This allowed them to visualize deeper underground features such as magma-related structures, fracture-controlled fluid pathways and rock layers that “seal in” supercritical fluids.

    “In addition to this, we applied advanced seismic tomography and machine-learning based analyses to determine the seismic velocity of specific structures and earthquake mechanisms with high accuracy,” explains Tsuji. “It was this integrated approach that allowed us to image a deep geothermal system in unprecedented detail.” He adds that the new technique is also better suited to mountainous geothermal regions where limited road access makes it hard to deploy the seismic sources and receivers used in conventional surveys.

    A promising site for future supercritical geothermal energy production

    Tsuji and colleagues chose to study the Kuju area because it is home to several volcanoes that were active roughly 1600 years ago and have erupted intermittently in recent years. The region also hosts two major geothermal power plants, Hatchobaru and Otake. The former has a capacity of 110 MW and is the largest geothermal facility in Japan.

    The heat source for both plants is thought to be located beneath Mt Kuroiwa and Mt Sensui, and the region is considered a promising site for supercritical geothermal energy production. Its geothermal reservoir appears to consist of water that initially fell as precipitation (so-called meteoric water) and was heated underground before migrating westward through the fault system. Until now, though, no detailed images of the magmatic structures and fluid pathways had been obtained.

    Tsuji says he has long wondered why geothermal power is not more widely used in Japan, despite the country’s abundant volcanic and thermal resources. “Our results now provide the scientific and technical foundation for next-generation supercritical geothermal power,” he tells Physics World.

    The researchers now plan to try out their technique using portable seismic sources and sensors deployed in mountainous areas (not just along roads) to image the shallower parts of geothermal systems in greater detail as well. “We also plan to extend our surveys to other geothermal fields to test the general applicability of our method,” Tsuji says. “Ultimately, our goal is to provide a reliable scientific basis for the large-scale deployment of supercritical geothermal power as a sustainable energy source.”

    The present work is detailed in Communications Earth & Environment.

    The post Scientists obtain detailed maps of earthquake-triggering high-pressure subsurface fluids appeared first on Physics World.

    https://physicsworld.com/a/scientists-obtain-detailed-maps-of-earthquake-triggering-high-pressure-subsurface-fluids/
    Isabelle Dumé

    Researchers visualize blood flow in pulsating artificial heart

    Four-dimensional flow MRI reveals that blood flow in an artificial heart resembles that in a healthy human heart

    The post Researchers visualize blood flow in pulsating artificial heart appeared first on Physics World.

    A research team in Sweden has used real-time imaging technology to visualize the way that blood pumps around a pulsating artificial heart – moving medicine one step closer to the safe use of such devices in people waiting for donor transplants.

    The Linköping University (LiU) team used 4D flow MRI to examine the internal processes of a mechanical heart prototype created by Västerås-based technology company Scandinavian Real Heart. The researchers evaluated blood flow patterns and compared them with similar measurements taken in a native human heart, outlining their results in Scientific Reports.

    “As the pulsatile total artificial heart contains metal parts, like the motor, we used 3D printing [to replace most metal parts] and a physiological flow loop so we could run it in the MRI scanner under representable conditions,” says first author Twan Bakker, a PhD student at the Center for Medical Image Science and Visualization at LiU.

    No elevated risk

    According to Bakker, this is the first time that a 3D-printed, MRI-compatible artificial heart has been built and successfully evaluated using 4D flow MRI. The team was pleased to discover that the results corroborate the findings of previous computational fluid dynamics simulations indicating “low shear stress and low stagnation”. Overall flow patterns also suggest there is no elevated risk of blood complications compared with hearts in healthy humans and those suffering from valvular disease.

    “[The] patterns of low blood flow, a risk for thrombosis, were in the same range as for healthy native human hearts. Patterns of turbulent flow, a risk for activation of blood platelets, which can contribute to thrombosis, were lower than those found in patients with valvular disease,” says Bakker.

    “4D flow MRI allows us to measure the flow field without altering the function of the total artificial heart, which is therefore a valuable tool to complement computer simulations and blood testing during the development of the device. Our measurements provided valuable information to the design team that could improve the artificial heart prototype further,” he adds.

    Improved diagnostics

    A key advantage of 4D flow MRI over alternative measurement techniques – such as particle image velocimetry and laser Doppler anemometry – is that it doesn’t require the creation of a fully transparent model. This is an important distinction for Bakker, since some components in the artificial heart are made with materials possessing unique mechanical properties, meaning that replication in a see-through version would be extremely challenging.

    Visualizing blood flow: the central image shows a representation of the full cardiac cycle in the artificial heart, with circulating flow patterns in various locations highlighted at specified time points. (Courtesy: CC BY 4.0/Sci. Rep. 10.1038/s41598-025-18422-y)

    “With 4D flow MRI we had to move the motor away from the scanner bore, but the material in contact with the blood and the motion of the device remained as the original design,” says Bakker.

    According to Bakker, the velocity measurements can also be used to visualize and analyse haemodynamic parameters – such as turbulent kinetic energy and wall shear stress – both in the heart and in the body’s larger vessels.

    “By studying the flow dynamics in patients and healthy subjects, we can better understand its role in health and disease, which can then support improved diagnostics, interventions and surgical therapies,” he explains.

    Moving forward, Bakker says that the research team will continue to evaluate the improved heart design, which was recently granted designation as a Humanitarian Use Device (HUD) by the US Food and Drug Administration (FDA).

    “This makes it possible to apply for designation as a Humanitarian Device Exemption (HDE) – which may grant the device limited marketing rights and paves the way for the pre-clinical and clinical studies,” he says.

    “In addition, we are currently developing tools to compute blood flow using simulations. This may provide us with a deeper understanding of the mechanisms that cause the formation of thrombosis and haemolysis,” he tells Physics World.

    The post Researchers visualize blood flow in pulsating artificial heart appeared first on Physics World.

    https://physicsworld.com/a/researchers-visualize-blood-flow-in-pulsating-artificial-heart/
    No Author

    Evo CT-Linac eases access to online adaptive radiation therapy

    The Elekta Evo provides flexible options for cancer centres looking to implement adaptive radiation therapy

    The post Evo CT-Linac eases access to online adaptive radiation therapy appeared first on Physics World.

    Adaptive radiation therapy (ART) is a personalized cancer treatment in which a patient’s treatment plan can be updated throughout their radiotherapy course to account for any anatomical variations – either between fractions (offline ART) or immediately prior to dose delivery (online ART). Using high-fidelity images to enable precision tumour targeting, ART improves outcomes while reducing side effects by minimizing healthy tissue dose.

    Elekta, the company behind the Unity MR-Linac, believes that in time, all radiation treatments will incorporate ART as standard. Towards this goal, it brings its broad knowledge base from the MR-Linac to the new Elekta Evo, a next-generation CT-Linac designed to improve access to ART. Evo incorporates AI-enhanced cone-beam CT (CBCT), known as Iris, to provide high-definition imaging, while its Elekta ONE Online software automates the entire workflow, including auto-contouring, plan adaptation and end-to-end quality assurance.

    A world first

    In February of this year, Matthias Lampe and his team at the private centre DTZ Radiotherapy in Berlin, Germany became the first in the world to treat patients with online ART (delivering daily plan updates while the patient is on the treatment couch) using Evo. “To provide proper tumour control you must be sure to hit the target – for that, you need online ART,” Lampe tells Physics World.

    Initiating online ART: the team at DTZ Radiotherapy in Berlin treated the first patient in the world using Evo. (Courtesy: Elekta)

    The ability to visualize and adapt to daily anatomy enables reduction of the planning target volume, increasing safety for nearby organs-at-risk (OARs). “It is highly beneficial for all treatments in the abdomen and pelvis,” says Lampe. “My patients with prostate cancer report hardly any side effects.”

    Lampe selected Evo to exploit the full flexibility of its C-arm design. He notes that for the increasingly prevalent hypofractionated treatments, a C-arm configuration is essential. “CT-based treatment planning and AI contouring opened up a new world for radiation oncologists,” he explains. “When Elekta designed Evo, they enabled this in an achievable way with an extremely reliable machine. The C-arm linac is the primary workhorse in radiotherapy, so you have the best of everything.”

    Time considerations

    While online ART can take longer than conventional treatments, Evo’s use of automation and AI limits the additional time requirement to just five minutes – increasing the overall workflow time from 12 to 17 minutes and remaining within the clinic’s standard time slots.

    Elekta Evo: a next-generation CT-Linac designed to improve access to adaptive radiotherapy. (Courtesy: Elekta)

    The workflow begins with patient positioning and CBCT imaging, with Evo’s AI-enhanced Iris imaging significantly improving image quality, crucial when performing ART. The radiation therapist then matches the cone-beam and planning CTs and performs any necessary couch shift.

    Simultaneously, Elekta ONE Online performs AI auto-contouring of OARs, which are reviewed by the physician, and the target volume is copied in. The physicist then simulates the dose distribution on the new contours, followed by a plan review. “Then you can decide whether to adapt or not,” says Lampe. “This is an outstanding feature.” The final stage is treatment delivery and online dosimetry.

    When DTZ Berlin first began clinical treatments with Evo, some of Lampe’s colleagues were apprehensive as they were attached to the conventional workflow. “But now, with CBCT providing the chance to see what will be treated, every doctor on my team has embraced the shift and wouldn’t go back,” he says.

    The first treatments were for prostate cancer, a common indication that’s relatively easy to treat. “I also thought that if the Elekta ONE workflow struggled, I could contour this on my own in a minute,” says Lampe. “But this was never necessary, the process is very solid. Now we also treat prostate cancer patients with lymph node metastases and those with relapse after radiotherapy. It’s a real success story.”

    Lampe says that older and frailer patients may benefit the most from online ART, pointing out that while published studies often include relatively young, healthy patients, “our patients are old, they have chronic heart disease, they’re short of breath”.

    For prostate cancer, for example, patients are instructed to arrive with a full bladder and an empty rectum. “But if a patient is in his eighties, he may not be able to do this and the volumes will be different every day,” Lampe explains. “With online adaptive, you can tell patients: ‘if this is not possible, we will handle it, don’t stress yourself’. They are very thankful.”

    Making ART available to all

    At UMC Utrecht in the Netherlands, the radiotherapy team has also added CT-Linac online adaptive to its clinical toolkit.

    UMC Utrecht is renowned for its development of MR-guided radiotherapy, with physicists Bas Raaymakers and Jan Lagendijk pioneering the development of a hybrid MR-Linac. “We come from the world of MR-guidance, so we know that ART makes sense,” says Raaymakers. “But if we only offer MR-guided radiotherapy, we miss out on a lot of patients. We wanted to bring it to the wider community.”

    ART for all: the radiotherapy team at UMC Utrecht in the Netherlands has added CT-Linac online adaptive to its clinical toolkit. (Courtesy: UMC Utrecht)

    At the time of speaking to Physics World, the team was treating its second patient with CBCT-guided ART, and had delivered about 30 fractions. Both patients were treated for bladder cancer, with future indications to explore including prostate, lung and breast cancers and bone metastases.

    “We believe in ART for all patients,” says medical physicist Anette Houweling. “If you have MR and CT, you should be able to choose the optimal treatment modality based on image quality. For below the diaphragm, this is probably MR, while for the thorax, CT might be better.”

    Ten-minute target for online ART

    Houweling says that ART delivery has taken 19 minutes on average. “We record the CBCT, perform image fusion and then the table is moved, that’s all standard,” she explains. “Then the adaptive part comes in: delineation on the CBCT and creating a new plan with Elekta ONE Planning as part of Elekta One Online.”

    The plan adaptation, when selected, takes roughly four minutes to create a clinical-grade volumetric-modulated arc therapy (VMAT) plan. With the soon-to-be-installed next-generation optimizer, it is expected to take less than one minute to generate a VMAT plan.

    “As you start with the regular workflow, you can still decide not to choose adaptive treatment, and do a simple couch shift, up until the last second,” says Raaymakers. “It’s very close to the existing workflow, which makes adoption easier. Also, the treatment slots are comparable to standard slots. Now with CBCT it takes 19 minutes and we believe we can get towards 10. That’s one of the drivers for cone-beam adaptive.”

    Shorter treatment times will impact the decision as to which patients receive ART. If fully automated adaptive treatment is deliverable in a 10-minute time slot, it could be available to all patients. “From the physics side, our goal is to have no technological limitations to delivering ART. Then it’s up to the radiation oncologists to decide which patients might benefit,” Raaymakers explains.

    Future gazing

    Looking to the future, Raaymakers predicts that simulation-free radiotherapy will be adopted for certain standard treatments. “Why do you need days of preparation if you can condense the whole process to the moment when the patient is on the table,” he says. “That would be very much helped by online ART.”

    “Scroll forward a few years and I expect that ART will be automated and fast such that the user will just sign off the autocontours and plan in one, maybe tune a little, and then go ahead,” adds Houweling. “That will be the ultimate goal of ART. Then there’s no reason to perform radiotherapy the traditional way.”

    The post Evo CT-Linac eases access to online adaptive radiation therapy appeared first on Physics World.

    https://physicsworld.com/a/evo-ct-linac-eases-access-to-online-adaptive-radiation-therapy/
    Tami Freeman

    Jesper Grimstrup’s The Ant Mill: could his anti-string-theory rant do string theorists a favour?

    Robert P Crease examines a new example of “rant lit” from Danish theorist Jesper Grimstrup

    The post Jesper Grimstrup’s The Ant Mill: could his anti-string-theory rant do string theorists a favour? appeared first on Physics World.

    Imagine you had a bad breakup in college. Your ex-partner is furious and self-publishes a book that names you in its title. You’re so humiliated that you only dimly remember this ex, though the book’s details and anecdotes ring true.

    According to the book, you used to be inventive, perceptive and dashing. Then you started hanging out with the wrong crowd, and became competitive, self-involved and incapable of true friendship. Your ex struggles to turn you around; failing, they leave. The book, though, is so over-the-top that by the end you stop cringing and find it a hoot.

    That’s how I think most Physics World readers will react to The Ant Mill: How Theoretical High-energy Physics Descended into Groupthink, Tribalism and Mass Production of Research. Its author and self-publisher is the Danish mathematician-physicist Jesper Grimstrup, whose previous book was Shell Beach: the Search for the Final Theory.

    After receiving his PhD in theoretical physics at the Technical University of Vienna in 2002, Grimstrup writes, he was “one of the young rebels” embarking on “a completely unexplored area” of theoretical physics, combining elements of loop quantum gravity and noncommutative geometry. But there followed a decade of rejected articles and lack of opportunities.

    Grimstrup became “disillusioned, disheartened, and indignant” and in 2012 left the field, selling his flat in Copenhagen to finance his work. Grimstrup says he is now a “self-employed researcher and writer” who lives somewhere near the Danish capital. You can support him either through Ko-fi or Paypal.

    Fomenting fear

    The Ant Mill opens with a copy of the first page of the letter that Grimstrup’s fellow Dane Niels Bohr sent in 1917 to the University of Copenhagen successfully requesting a four-storey building for his physics institute. Grimstrup juxtaposes this incident with the rejection of his funding request, almost a century later, by the Danish Council for Independent Research.

    Today, he writes, theoretical physics faces a situation “like the one it faced at the time of Niels Bohr”, but structural and cultural factors have severely hampered it, making it impossible to pursue promising new ideas. These include Grimstrup’s own “quantum holonomy theory, which is a candidate for a fundamental theory”. The Ant Mill is his diagnosis of how this came about.

    A major culprit, in Grimstrup’s eyes, was the completion of the Standard Model of particle physics. It finished the structure that theorists had been trained to build as its architects, and it should have led to the flourishing of a new crop of theoretical ideas. But it had the opposite effect. The field, according to Grimstrup, is now dominated by influential groups that squeeze out other approaches.

    The biggest and most powerful is string theory, with loop quantum gravity its chief rival. Neither member of the coterie can make testable predictions, yet because they control jobs, publications and grants they intimidate young researchers and create what Grimstrup calls an “undercurrent of fear”. (I leave assessment of this claim to young theorists.)

    Roughly half the chapters begin with an anecdote in which Grimstrup describes an instance of rejection by a colleague, editor or funding agency. In the book’s longest chapter Grimstrup talks about his various rejections – by the Carlsberg Foundation, The European Physics Journal C, International Journal of Modern Physics A, Classical and Quantum Gravity, Reports on Mathematical Physics, Journal of Geometry and Physics and the Journal of Noncommutative Geometry.

    Grimstrup says that the reviewers and editors of these journals told him that his papers variously lacked concrete physical results, were exercises in mathematics, seemed the same as other papers, or lacked “relevance and significance”. Grimstrup sees this as the coterie’s handiwork, for such journals are full of string theory papers open to the same criticism.

    “Science is many things,” Grimstrup writes at the end. “[S]imultaneously boring and scary, it is both Indiana Jones and anonymous bureaucrats, and it is precisely this diversity that is missing in the modern version of science.” What the field needs is “courage…hunger…ambition…unwillingness to compromise…anarchy”.

    Grimstrup hopes that his book will have an impact, helping to inspire young researchers to revolt, and to make all the scientific bureaucrats and apparatchiks and bookkeepers and accountants “wake up and remember who they truly are”.

    The critical point

    The Ant Mill is an example of what I have called “rant literature” or rant lit. Evangelical, convinced that exposing the truth will make sinners come to their senses and change their evil ways, rant lit can be fun to read, for it is passionate and full of florid metaphors.

    Theoretical physicists, Grimstrup writes, have become “obedient idiots” and “technicians” (the phrase appearing in an e-mail cited in the book that was written by an unidentified person with whom the author disagrees). Theoretical physics, he suggests, has become a “kingdom”, a “cult”, a “hamster wheel” and “ant mill”, in which the ants march around in a pre-programmed “death spiral”.

    An attentive reader, however, may come away with a different lesson. Grimstrup calls falsifiability the “crown jewel of the natural sciences” and hammers away at theories lacking it. But his vehemence invites you to ask: “Is falsifiability really the sole criterion for deciding whether to accept or fail to pursue a theory?”

    In his 2013 book String Theory and the Scientific Method, for instance, the Stockholm University philosopher of science Richard Dawid suggested rescuing the scientific status of string theory by adding such non-empirical criteria to evaluating theories as clarity, coherence and lack of alternatives. It’s an approach that both rescues the formalistic approach to the scientific method and undermines it.

    Dawid, you see, is making the formalism follow the practice rather than the other way around. In other words, he is able to reformulate how we make theories because he already knows how theorizing works – not because he only truly knows what it is to theorize after he gets the formalism right.

    Grimstrup’s rant, too, might remind you of the birth of the Yang–Mills theory in 1954. Developed by Chen Ning Yang and Robert Mills, it was a theory of nuclear binding that integrated much of what was known about elementary particle theory but implied the existence of massless force-carrying particles that then were known not to exist. In fact, at one seminar Wolfgang Pauli unleashed a tirade against Yang for proposing so obviously flawed a theory.

    The theory, however, became central to theoretical physics two decades later, after theorists learned more about the structure of the world. The Yang–Mills story, in other words, reveals that theory-making does not always conform to formal strictures and does not always require a testable prediction. Sometimes it just articulates the best way to make sense of the world apart from proof or evidence.

    The lesson I draw is that becoming the target of a rant might not always make you feel repentant and ashamed. It might inspire you into deep reflection on who you are in a way that is insightful and vindicating. It might even make you more rather than less confident about why you’re doing what you’re doing.

    Your ex, of course, would be horrified.

    • Jesper Grimstrup published an entry on his blog, dated 1 November 2025, expressing his views on this review.

    The post Jesper Grimstrup’s The Ant Mill: could his anti-string-theory rant do string theorists a favour? appeared first on Physics World.

    https://physicsworld.com/a/jesper-grimstrups-the-ant-mill-could-his-anti-string-theory-rant-do-string-theorists-a-favour/
    Robert P Crease

    Further evidence for evolving dark energy?

    The term dark energy, first used in 1998, is a proposed form of energy that affects the universe on the largest scales. Its primary effect is to drive the accelerating expansion of the universe – an observation that was awarded the 2011 Nobel Prize in Physics. Dark energy is now a well established concept and […]

    The post Further evidence for evolving dark energy? appeared first on Physics World.

    The term dark energy, first used in 1998, is a proposed form of energy that affects the universe on the largest scales. Its primary effect is to drive the accelerating expansion of the universe – an observation that was awarded the 2011 Nobel Prize in Physics.

    Dark energy is now a well established concept and forms a key part of the standard model of Big Bang cosmology, the Lambda-CDM model.

    The trouble is, we’ve never really been able to explain exactly what dark energy is, or why it has the value that it does.

    Even worse, new data acquired by cutting-edge telescopes have suggested that dark energy might not even exist as we had imagined it.

    This is where the new work by Mukherjee and Sen comes in. They combined two of these datasets, while making as few assumptions as possible, to understand what’s going on.

    The first of these datasets came from baryon acoustic oscillations. These are patterns in the distribution of matter in the universe, created by sound waves in the early universe.

    The second dataset comes from a five-year survey of supernovae (the DES-SN5YR sample). Both sets of data can be used to track the expansion history of the universe by measuring distances at different snapshots in time.
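
    Those distances, in a flat universe, depend on the dark-energy equation of state w(z). As a rough guide – this is the standard textbook relation, not an expression taken from Mukherjee and Sen’s paper – the luminosity distance is:

    ```latex
    d_L(z) = (1+z)\,\frac{c}{H_0}\int_0^{z}\frac{\mathrm{d}z'}{\sqrt{\Omega_m (1+z')^3 + \Omega_{\mathrm{DE}}\exp\!\left[3\int_0^{z'}\frac{1+w(x)}{1+x}\,\mathrm{d}x\right]}}
    ```

    Setting w = −1 at all redshifts recovers the constant dark energy of Lambda-CDM; a measured deviation in the distance–redshift relation therefore points towards an evolving w(z).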

    The team’s results are in tension with the Lambda-CDM model at low redshifts. Put simply, the results disagree with the current model at recent times. This provides further evidence for the idea that dark energy, previously considered to have a constant value, is evolving over time.

    Evolving dark energy
    The tension in the expansion rate is most evident at low redshifts (Courtesy: P. Mukherjee)

    This is far from the end of the story for dark energy. New observational data, and new analyses such as this one, are urgently required to provide a clearer picture.

    However, where there’s uncertainty, there’s opportunity. Understanding dark energy could hold the key to understanding quantum gravity, the Big Bang and the ultimate fate of the universe.

    Read the full article

    New expansion rate anomalies at characteristic redshifts geometrically determined using DESI-DR2 BAO and DES-SN5YR observations – IOPscience

    Mukherjee and Sen, 2025 Rep. Prog. Phys. 88 098401

    The post Further evidence for evolving dark energy? appeared first on Physics World.

    https://physicsworld.com/a/further-evidence-for-evolving-dark-energy/
    Paul Mabey

    Searching for dark matter particles

    A research team from China and Denmark have proposed a new, far more efficient, method of detecting ultralight dark matter particles in the lab

    The post Searching for dark matter particles appeared first on Physics World.

    Dark matter is a hypothesised form of matter that does not emit, absorb, or reflect light, making it invisible to electromagnetic observations. Although we have never detected it, its existence is inferred from its gravitational effects on visible matter and the large-scale structure of the universe.

    The Standard Model of particle physics does not contain any dark matter particles, but several extensions have been proposed that could include them. Some of these predict very low-mass particles such as the axion or the sterile neutrino.

    Detecting these hypothesised particles is very challenging, however, due to the extreme sensitivity required.

    Electromagnetic resonant systems, such as cavities and LC circuits, are widely used for this purpose, as well as to detect high-frequency gravitational waves.

    When an external signal matches one of these systems’ resonant frequencies, the system responds with a large amplitude, making the signal possible to detect. However, there is always a trade-off between the sensitivity of the detector and the range of frequencies it is able to detect (its bandwidth).

    A natural way to overcome this compromise is to consider multi-mode resonators, which can be viewed as coupled networks of harmonic oscillators. Their scan efficiency can be significantly enhanced beyond the standard quantum limit of simple single-mode resonators.
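
    As a toy illustration of that idea – my own sketch, not the detector model from the paper – coupling a chain of identical oscillators splits a single resonance into a band of normal modes, spreading the response over a wider range of frequencies:

    ```python
    # Toy sketch (illustrative only, not the paper's detector model):
    # normal-mode frequencies of N identical oscillators with nearest-neighbour coupling.
    import numpy as np

    def normal_mode_freqs(n_modes, omega0=1.0, coupling=0.05):
        """Return the normal-mode frequencies of a chain of coupled oscillators."""
        # frequency-squared matrix: omega0^2 on the diagonal, weak coupling off-diagonal
        m = np.diag(np.full(n_modes, omega0**2))
        off = np.full(n_modes - 1, -coupling * omega0**2)
        m = m + np.diag(off, k=1) + np.diag(off, k=-1)
        return np.sqrt(np.linalg.eigvalsh(m))

    # a single mode responds only at omega0; five coupled modes cover a band around it
    print(normal_mode_freqs(1))
    print(normal_mode_freqs(5))
    ```

    In the actual proposal the couplings are engineered so that unwanted off-resonant shifts cancel, but the basic point – several coupled modes cover more bandwidth than one – survives in this toy picture.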

    In a recent paper, the researchers demonstrated how multi-mode resonators can achieve the advantages of both sensitive and broadband detection. By connecting adjacent modes inside the resonant cavity and tuning these interactions to comparable magnitudes, off-resonant (i.e. unwanted) frequency shifts are effectively cancelled, increasing the overall response of the system.

    Their method allows us to search for these elusive dark matter particles in a faster, more efficient way.

    Dark matter detection circuit
    A multi-mode detector design, where the first mode couples to dark matter and the last mode is read out (Courtesy: Y. Chen)

    Read the full article

    Simultaneous resonant and broadband detection of ultralight dark matter and high-frequency gravitational waves via cavities and circuits – IOPscience

    Chen et al. 2025 Rep. Prog. Phys. 88 057601

    The post Searching for dark matter particles appeared first on Physics World.

    https://physicsworld.com/a/searching-for-dark-matter-particles/
    Paul Mabey

    Physicists explain why some fast-moving droplets stick to hydrophobic surfaces

    New experiments and calculations could improve aerosol and microfluidic technologies while shedding more light on airborne disease transmission

    The post Physicists explain why some fast-moving droplets stick to hydrophobic surfaces appeared first on Physics World.

    What happens when a microscopic drop of water lands on a water-repelling surface? The answer is important for many everyday situations, including pesticides being sprayed on crops and the spread of disease-causing aerosols. Naively, one might expect it to depend on the droplet’s speed, with faster-moving droplets bouncing off the surface and slower ones sticking to it. However, according to new experiments, theoretical work and simulations by researchers in the UK and the Netherlands, it’s more complicated than that.

    “If the droplet moves too slowly, it sticks,” explains Jamie McLauchlan, a PhD student at the University of Bath, UK who led the new research effort with Bath’s Adam Squires and Anton Souslov of the University of Cambridge. “Too fast, and it sticks again. Only in between is bouncing possible, where there is enough momentum to detach from the surface but not so much that it collapses back onto it.”

    As well as this new velocity-dependent condition, the researchers also discovered a size effect in which droplets that are too small cannot bounce, no matter what their speed. This size limit, they say, is set by the droplets’ viscosity, which prevents the tiniest droplets from leaving the surface once they land on it.

    Smaller-sized, faster-moving droplets

    While academic researchers and industrialists have long studied single-droplet impacts, McLauchlan says that much of this earlier work focused on millimetre-sized drops that took place on millisecond timescales. “We wanted to push this knowledge to smaller sizes of micrometre droplets and faster speeds, where higher surface-to-volume ratios make interfacial effects critical,” he says. “We were motivated even further during the COVID-19 pandemic, when studying how small airborne respiratory droplets interact with surfaces became a significant concern.”

    Working at such small sizes and fast timescales is no easy task, however. To record the outcome of each droplet landing, McLauchlan and colleagues needed a high-speed camera that effectively slowed down motion by a factor of 100 000. To produce the droplets, they needed piezoelectric droplet generators capable of dispensing fluid via tiny 30-micron nozzles. “These dispensers are highly temperamental,” McLauchlan notes. “They can become blocked easily by dust and fibres and fail to work if the fluid viscosity is too high, making experiments delicate to plan and run. The generators are also easy to break and expensive.”

    Droplet modelled as a tiny spring

    The researchers used this experimental set-up to create and image droplets between 30‒50 µm in diameter as they struck water-repelling surfaces at speeds of 1‒10 m/s. They then compared their findings with calculations based on a simple mathematical model that treats a droplet like a tiny spring, taking into account three main parameters in addition to its speed: the stickiness of the surface; the viscosity of the droplet liquid; and the droplet’s surface tension.
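
    For a feel of the regime involved, droplet impacts are commonly characterized by the Weber number (impact inertia versus surface tension) and the Ohnesorge number (the relative importance of viscosity). The figures below are my own illustrative estimates for water-like properties, not values from the study:

    ```python
    # Illustrative estimates (not from the paper): dimensionless numbers for
    # water-like microdroplets hitting a surface.
    import math

    rho   = 1000.0   # density, kg/m^3
    sigma = 0.072    # surface tension, N/m
    mu    = 1.0e-3   # dynamic viscosity, Pa s

    def weber(diameter_m, speed_ms):
        """Ratio of impact kinetic energy to surface energy."""
        return rho * speed_ms**2 * diameter_m / sigma

    def ohnesorge(diameter_m):
        """Relative importance of viscous dissipation; grows as droplets shrink."""
        return mu / math.sqrt(rho * sigma * diameter_m)

    for d_um in (5, 30, 50):
        d = d_um * 1e-6
        print(f"D = {d_um:2d} um: We(5 m/s) = {weber(d, 5.0):6.2f}, Oh = {ohnesorge(d):.3f}")
    ```

    The growth of the Ohnesorge number at small diameters is consistent with the team’s finding that the tiniest droplets are too strongly damped by viscosity to bounce.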

    Previous research had shown that on perfectly non-wetting surfaces, bouncing does not depend on velocity. Other studies showed that on very smooth surfaces, droplets can bounce on a thin air layer. “Our work has explored a broader range of hydrophobic surfaces, showing that bouncing occurs due to a delicate balance of kinetic energy, viscous dissipation and interfacial energies,” McLauchlan tells Physics World.

    This is exciting, he adds, because it reveals a previously unexplored regime for bounce behaviour: droplets that are too small, or too slow, will always stick, while sufficiently fast droplets can rebound. “This finding provides a general framework that explains bouncing at the micron scale, which is directly relevant for aerosol science,” he says.

    A novel framework for engineering microdroplet processes

    McLauchlan thinks that by linking bouncing to droplet velocity, size and surface properties, the new framework could make it easier to engineer microdroplets for specific purposes. “In agriculture, for example, understanding how spray velocities interact with plant surfaces with different hydrophobicity could help determine when droplets deposit fully versus when they bounce away, improving the efficiency of crop spraying,” he says.

    Such a framework could also be beneficial in the study of airborne diseases, since exhaled droplets frequently bump into surfaces while floating around indoors. While droplets that stick are removed from the air, and can no longer transmit disease via that route, those that bounce are not. Quantifying these processes in typical indoor environments will provide better models of airborne pathogen concentrations and therefore disease spread, McLauchlan says. For example, in healthcare settings, coatings could be designed to inhibit or promote bouncing, ensuring that high-velocity respiratory droplets from sneezes either stick to hospital surfaces or recoil from them, depending on which mode of potential transmission (airborne or contact-based) is being targeted.

    The researchers now plan to expand their work on aqueous droplets to droplets with more complex soft-matter properties. “This will include adding surfactants, which introduce time-dependent surface tensions, and polymers, which give droplets viscoelastic properties similar to those found in biological fluids,” McLauchlan reveals. “These studies will present significant experimental challenges, but we hope they broaden the relevance of our findings to an even wider range of fields.”

    The present work is detailed in PNAS.

    The post Physicists explain why some fast-moving droplets stick to hydrophobic surfaces appeared first on Physics World.

    https://physicsworld.com/a/physicists-explain-why-some-fast-moving-droplets-stick-to-hydrophobic-surfaces/
    Isabelle Dumé

    Quantum computing on the verge: a look at the quantum marketplace of today

    Philip Ball dives into the latest developments in the quantum-computing industry

    The post Quantum computing on the verge: a look at the quantum marketplace of today appeared first on Physics World.

    “I’d be amazed if quantum computing produces anything technologically useful in ten years, twenty years, even longer.” So wrote University of Oxford physicist David Deutsch – often considered the father of the theory of quantum computing – in 2004. But, as he added in a caveat, “I’ve been amazed before.”

    We don’t know how amazed Deutsch, a pioneer of quantum computing, would have been had he attended a meeting at the Royal Society in London in February on “the future of quantum information”. But it was tempting to conclude from the event that quantum computing has now well and truly arrived, with working machines that harness quantum mechanics to perform computations being commercially produced and shipped to clients. Serving as the UK launch of the International Year of Quantum Science and Technology (IYQ) 2025, it brought together some of the key figures of the field to spend two days discussing quantum computing as something like a mature industry, even if one in its early days.

    Werner Heisenberg – who worked out the first proper theory of quantum mechanics 100 years ago – would surely have been amazed to find that the formalism he and his peers developed to understand the fundamental behaviour of tiny particles had generated new ways of manipulating information to solve real-world problems in computation. So far, quantum computing – which exploits phenomena such as superposition and entanglement to potentially achieve greater computational power than the best classical computers can muster – hasn’t tackled any practical problems that can’t be solved classically.

    Although the fundamental quantum principles are well-established and proven to work, there remain many hurdles that quantum information technologies have to clear before this industry can routinely deliver resources with transformative capabilities. But many researchers think that moment of “practical quantum advantage” is fast approaching, and an entire industry is readying itself for that day.

    Entangled marketplace

    So what are the current capabilities and near-term prospects for quantum computing?

    The first thing to acknowledge is that a booming quantum-computing market exists. Devices are being produced for commercial use by a number of tech firms: from the likes of IBM, Google, D-Wave and Rigetti, which have been in the field for a decade or more, to relative newcomers such as Nord Quantique (Canada), IQM (Finland), Quantinuum (UK and US), Orca (UK), PsiQuantum (US) and Silicon Quantum Computing (Australia) – see the box below, “The global quantum ecosystem”.

    The global quantum ecosystem

    Map showing the investments globally into quantum computing
    (Courtesy: QURECA)

    We are on the cusp of a second quantum revolution, with quantum science and technologies growing rapidly across the globe. This includes quantum computers; quantum sensing (ultra-high precision clocks, sensors for medical diagnostics); as well as quantum communications (a quantum internet). Indeed, according to the State of Quantum 2024 report, a total of 33 countries around the world currently have government initiatives in quantum technology, of which more than 20 have national strategies with large-scale funding. As of this year, worldwide investments in quantum tech – by governments and industry – exceed $55.7 billion, and the market is projected to reach $106 billion by 2040. With the multitude of ground-breaking capabilities that quantum technologies bring globally, it’s unsurprising that governments all over the world are eager to invest in the industry.

    With data from a number of international reports and studies, quantum education and skills firm QURECA has summarized key programmes and efforts around the world. These include total government funding spent through 2025, as well as future commitments spanning 2–10 year programmes, varying by country. These initiatives generally represent government agencies’ funding announcements, related to their countries’ advancements in quantum technologies, excluding any private investments and revenues.

    A supply chain is also organically developing. It includes manufacturers of specific hardware components, such as Oxford Instruments and Quantum Machines, and software developers such as Riverlane, based in Cambridge, UK, and QC Ware in Palo Alto, California. Supplying the last link in this chain are a range of eager end-users, from finance companies such as J P Morgan and Goldman Sachs to pharmaceutical companies such as AstraZeneca and engineering firms like Airbus. Quantum computing is already big business, with around 400 active companies and current global investment estimated at around $2 billion.

    But the immediate future of all this buzz is hard to assess. When the chief executive of computer giant Nvidia announced at the start of 2025 that “truly useful” quantum computers were still two decades away, the previously burgeoning share prices of some leading quantum-computing companies plummeted. They have since recovered somewhat, but such volatility reflects the fact that quantum computing has yet to prove its commercial worth.

    The field is still new and firms need to manage expectations and avoid hype while also promoting an optimistic enough picture to keep investment flowing in. “Really amazing breakthroughs are being made,” says physicist Winfried Hensinger of the University of Sussex, “but we need to get away from the expectancy that [truly useful] quantum computers will be available tomorrow.”

    The current state of play is often called the “noisy intermediate-scale quantum” (NISQ) era. That’s because the “noisy” quantum bits (qubits) in today’s devices are prone to errors for which no general and simple correction process exists. Current quantum computers can’t therefore carry out practically useful computations that could not be done on classical high-performance computing (HPC) machines. It’s not just a matter of better engineering either; the basic science is far from done.

    IBM quantum computer cryogenic chandelier
    Building up Quantum computing behemoth IBM says that by 2029, its fault-tolerant system should accurately run 100 million gates on 200 logical qubits, thereby truly achieving quantum advantage. (Courtesy: IBM)

    “We are right on the cusp of scientific quantum advantage – solving certain scientific problems better than the world’s best classical methods can,” says Ashley Montanaro, a physicist at the University of Bristol who co-founded the quantum software company Phasecraft. “But we haven’t yet got to the stage of practical quantum advantage, where quantum computers solve commercially important and practically relevant problems such as discovering the next lithium-ion battery.” It’s no longer if or how, but when that will happen.

    Pick your platform

    As the quantum-computing business is such an emerging area, today’s devices use wildly different types of physical systems for their qubits – see the box below, “Comparing computing modalities: from qubits to architectures”. There is still no clear sign as to which of these platforms, if any, will emerge as the winner. Indeed, many researchers believe that no single qubit type will ever dominate.

    The top-performing quantum computers, like those made by Google (with its 105-qubit Willow chip) and IBM (which has made the 1121-qubit Condor), use qubits in which information is encoded in the wavefunction of a superconducting material. Until recently, the strongest competing platform seemed to be trapped ions, where the qubits are individual ions held in electromagnetic traps – a technology being developed into working devices by the US company IonQ, spun out from the University of Maryland, among others.

    Comparing computing modalities: from qubits to architectures

    Table listing out the different types of qubit, the advantages of each and which company uses which qubit
    (Courtesy: PatentVest)

    Much like classical computers, quantum computers have a core processor and a control stack – the difference being that the core depends on the type of qubit being used. Currently, quantum computing is not based on a single platform, but rather a set of competing hardware approaches, each with its own physical basis for creating and controlling qubits and keeping them stable.

    The data above – taken from the August 2025 report Quantum Computing at an Inflection Point: Who’s Leading, What They Own, and Why IP Decides Quantum’s Future by US firm PatentVest – show the key “quantum modalities”, that is, the different types of qubit and architecture used to build these quantum systems. Each qubit type has its own pros and cons, with factors including the temperature at which it operates, its coherence time, its gate speed, and how easily it might be scaled up.

    But over the past few years, neutral trapped atoms have emerged as a major contender, thanks to advances in controlling the positions and states of these qubits. Here the atoms are prepared in highly excited electronic states called Rydberg atoms, which can be entangled with one another over a few microns. A Harvard startup called QuEra is developing this technology, as is the French start-up Pasqal. In September a team from the California Institute of Technology announced a 6100-qubit array made from neutral atoms. “Ten years ago I would not have included [neutral-atom] methods if I were hedging bets on the future of quantum computing,” says Deutsch’s Oxford colleague, the quantum information theorist Andrew Steane. But like many, he thinks differently now.

    Some researchers believe that optical quantum computing, using photons as qubits, will also be an important platform. One advantage here is that photonic signals travelling to or from the processing units need no complex conversion to pass through existing telecommunications networks, which is also handy for photonic interconnections between chips. What’s more, photonic circuits can work at room temperature, whereas trapped ions and superconducting qubits need to be cooled. Photonic quantum computing is being developed by firms like PsiQuantum, Orca, and Xanadu.

    Other efforts, for example at Intel and Silicon Quantum Computing in Australia, make qubits from either quantum dots (Intel) or precision-placed phosphorus atoms (SQC), both in good old silicon, which benefits from a very mature manufacturing base. “Small qubits based on ions and atoms yield the highest quality processors”, says Michelle Simmons of the University of New South Wales, who is the founder and CEO of SQC. “But only atom-based systems in silicon combine this quality with manufacturability.”

    Spinning around Intel’s silicon spin qubits are now being manufactured on an industrial scale. (Courtesy: Intel Corporation)

    And it’s not impossible that entirely new quantum computing platforms might yet arrive. At the start of 2025, researchers at Microsoft’s laboratories in Washington State caused a stir when they announced that they had made topological qubits from semiconducting and superconducting devices, which are less error-prone than those currently in use. The announcement left some scientists disgruntled because it was not accompanied by a peer-reviewed paper providing the evidence for these long-sought entities. But in any event, most researchers think it would take a decade or more for topological quantum computing to catch up with the platforms already out there.

    Each of these quantum technologies has its own strengths and weaknesses. “My personal view is that there will not be a single architecture that ‘wins’, certainly not in the foreseeable future,” says Michael Cuthbert, founding director of the UK’s National Quantum Computing Centre (NQCC), which aims to facilitate the transition of quantum computing from basic research to an industrial concern. Cuthbert thinks the best platform will differ for different types of computation: cold neutral atoms might be good for quantum simulations of molecules, materials and exotic quantum states, say, while superconducting and trapped-ion qubits might be best for problems involving machine learning or optimization.

    Measures and metrics

    Given these pros and cons of different hardware platforms, one difficulty in assessing their merits is finding meaningful metrics for making comparisons. Should we be comparing error rates, coherence times (basically how long qubits remain entangled), gate speeds (how fast a single computational step can be conducted), circuit depth (how many steps a single computation can sustain), number of qubits in a processor, or what? “The metrics and measures that have been put forward so far tend to suit one or other platform more than others,” says Cuthbert, “such that it becomes almost a marketing exercise rather than a scientific benchmarking exercise as to which quantum computer is better.”

    The NQCC evaluates the performance of devices using a factor known as the “quantum operation” (QuOp). This is simply the number of quantum operations that can be carried out in a single computation, before the qubits lose their coherence and the computation dissolves into noise. “If you want to run a computation, the number of coherent operations you can run consecutively is an objective measure,” Cuthbert says. If we want to get beyond the NISQ era, he adds, “we need to progress to the point where we can do about a million coherent operations in a single computation. We’re now at the level of maybe a few thousand. So we’ve got a long way to go before we can run large-scale computations.”
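
    A crude way to see where those numbers come from – a back-of-the-envelope estimate of my own, not the NQCC’s formal benchmark – is to divide a qubit’s coherence time by the duration of a single gate:

    ```python
    # Back-of-the-envelope sketch (not the NQCC's formal QuOp benchmark):
    # roughly how many operations fit within a qubit's coherence time.
    def coherent_ops(coherence_time_s, gate_time_s):
        """Approximate number of gates that can run before coherence is lost."""
        return coherence_time_s / gate_time_s

    # hypothetical example numbers, broadly typical of superconducting hardware
    print(coherent_ops(coherence_time_s=100e-6, gate_time_s=50e-9))  # ~2000 operations
    ```

    Numbers of this order sit at the “few thousand” level Cuthbert describes; reaching a million coherent operations requires some combination of longer coherence times, faster gates and error correction.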

    One important issue is how amenable the platforms are to making larger quantum circuits. Cuthbert contrasts the issue of scaling up – putting more qubits on a chip – with “scaling out”, whereby chips of a given size are linked in modular fashion. Many researchers think it unlikely that individual quantum chips will have millions of qubits like the silicon chips of today’s machines. Rather, they will be modular arrays of relatively small chips linked at their edges by quantum interconnects.

    Having made the Condor, IBM now plans to focus on modular architectures (scaling out) – a necessity anyway, since superconducting qubits are micron-sized, so a chip with millions of them would be “bigger than your dining room table”, says Cuthbert. But superconducting qubits are not easy to scale out because microwave frequencies that control and read out the qubits have to be converted into optical frequencies for photonic interconnects. Cold atoms are easier to scale up, as the qubits are small, while photonic quantum computing is easiest to scale out because it already speaks the same language as the interconnects.

    To build so-called “fault-tolerant” quantum computers, quantum platforms must solve the issue of error correction, which will enable more extensive computations without the results degrading into mere noise.

    In part two of this feature, we will explore how this is being achieved and meet the various firms developing quantum software. We will also look into the potential high-value commercial uses for robust quantum computers – once such devices exist.

    • This article was updated with additional content on 22 October 2025.

    This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

    Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

    Find out more on our quantum channel.

    The post Quantum computing on the verge: a look at the quantum marketplace of today appeared first on Physics World.

    https://physicsworld.com/a/quantum-computing-on-the-verge-a-look-at-the-quantum-marketplace-of-today/
    No Author

    Physicists achieve first entangled measurement of W states

    Breakthrough in Japan could pave the way for robust quantum communication and scalable networks

    The post Physicists achieve first entangled measurement of W states appeared first on Physics World.

    Imagine two particles so interconnected that measuring one immediately reveals information about the other, even if the particles are light–years apart. This phenomenon, known as quantum entanglement, is the foundation of a variety of technologies such as quantum cryptography and quantum computing. However, entangled states are notoriously difficult to control. Now, for the first time, a team of physicists in Japan has performed a collective quantum measurement on a W state comprising three entangled photons. This allowed them to analyse the three entangled photons at once rather than one at a time. This achievement, reported in Science Advances, marks a significant step towards the practical development of quantum technologies.

    Physicists usually measure entangled particles using a technique known as quantum tomography. In this method, many identical copies of a particle are prepared, and each copy is measured at a different angle. The results of these measurements are then combined to reconstruct the full quantum state. To visualize this, imagine being asked to take a family photo. Instead of taking one group picture, you have to photograph each family member individually and then combine all the photos into a single portrait. An entangled measurement is the equivalent of simply taking one photograph of the entire family: all particles are measured simultaneously rather than separately, which makes the measurement significantly faster and more efficient.

    So far, for three-particle systems, entangled measurements have only been performed on Greenberger–Horne–Zeilinger (GHZ) states, where all qubits (quantum bits of a system) are either in one state or another. Until now, no one had carried out an entangled measurement for a more complicated set of states known as W states, which do not share this all-or-nothing property. In their experiment, the researchers at Kyoto University and Hiroshima University specifically used the simplest type of W state, made up of three photons, where each photon’s polarization (horizontal or vertical) is represented by one qubit.

    “In a GHZ state, if you measure one qubit, the whole superposition collapses. But in a W state, even if you measure one particle, entanglement still remains,” explains Shigeki Takeuchi, corresponding author of the paper describing the study. This robustness makes the W state particularly appealing for quantum technologies.
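
    For reference, the standard three-qubit forms of the two families – textbook definitions, not notation taken from the paper – are:

    ```latex
    |\mathrm{GHZ}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|000\rangle + |111\rangle\bigr),
    \qquad
    |\mathrm{W}\rangle = \tfrac{1}{\sqrt{3}}\bigl(|001\rangle + |010\rangle + |100\rangle\bigr)
    ```

    Measuring one qubit of the GHZ state leaves the other two in an unentangled product state, whereas measuring one qubit of the W state leaves the remaining pair entangled two-thirds of the time – the robustness Takeuchi describes.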

    Fourier transformations

    The team took advantage of the fact that different W states look almost identical but differ by a tiny phase shift, which acts as a hidden label distinguishing one state from another. Using a tool called a discrete Fourier transform (DFT) circuit, the researchers were able to “decode” this phase and tell the states apart.

    The DFT exploits a special type of symmetry inherent to W states. Since the method relies on symmetry, in principle it can be extended to systems containing any number of photons. The researchers prepared photons in controlled polarization states and ran them through the DFT, which provided each state’s phase label. Afterwards, the photons were sent through polarizing beam splitters that separated them into vertically and horizontally polarized groups. By counting both sets of photons, and combining this with information from the DFT, the team could identify the W state.

    The experiment identified the correct W state about 87% of the time, well above the 15% success rate typically achieved using tomography-based measurements. Maintaining this level of performance was a challenge, as tiny fluctuations in optical paths or photon loss can easily destroy the fragile interference pattern. The fact that the team could maintain stable performance long enough to collect statistically reliable data marks an important technical milestone.

    Scalable to larger systems

    “Our device is not just a single-shot measurement: it works with 100% efficiency,” Takeuchi adds. “Most linear optical protocols are probabilistic, but here the success probability is unity.” Although demonstrated with three photons, this procedure is directly scalable to larger systems, as the key insight is the symmetry that the DFT can detect.

    “In terms of applications, quantum communication seems the most promising,” says Takeuchi. “Because our device is highly efficient, our protocol could be used for robust communication between quantum computer chips. The next step is to build all of this on a tiny photonic chip, which would reduce errors and photon loss and help make this technology practical for real quantum computers and communication networks.”

    The post Physicists achieve first entangled measurement of W states appeared first on Physics World.

    https://physicsworld.com/a/physicists-achieve-first-entangled-measurement-of-w-states/
    Mira Varma

    Physicists apply quantum squeezing to a nanoparticle for the first time

    Demonstration could shed light on the nature of the classical-quantum transition for small objects

    The post Physicists apply quantum squeezing to a nanoparticle for the first time appeared first on Physics World.

    Physicists at the University of Tokyo, Japan have performed quantum mechanical squeezing on a nanoparticle for the first time. The feat, which they achieved by levitating the particle and rapidly varying the frequency at which it oscillates, could allow us to better understand how very small particles transition between classical and quantum behaviours. It could also lead to improvements in quantum sensors.

    Oscillating objects that are smaller than a few microns in diameter have applications in many areas of quantum technology. These include optical clocks and superconducting devices as well as quantum sensors. Such objects are small enough to be affected by Heisenberg’s uncertainty principle, which places a limit on how precisely we can simultaneously measure the position and momentum of a quantum object. More specifically, the product of the measurement uncertainties in the position and momentum of such an object must be greater than or equal to ħ/2, where ħ is the reduced Planck constant.

    In these circumstances, the only way to decrease the uncertainty in one variable – for example, the position – is to boost the uncertainty in the other. This process has no classical equivalent and is called squeezing because reducing uncertainty along one axis of position-momentum space creates a “bulge” in the other, like squeezing a balloon.

    A charge-neutral nanoparticle levitated in an optical lattice

    In the new work, which is detailed in Science, a team led by Kiyotaka Aikawa studied a single, charge-neutral nanoparticle levitating in a periodic intensity pattern formed by the interference of criss-crossed laser beams. Such patterns are known as optical lattices, and they are ideal for testing the quantum mechanical behaviour of small-scale objects because they can levitate the object. This keeps it isolated from other particles and allows it to sustain its fragile quantum state.

    After levitating the particle and cooling it to its motional ground state, the team rapidly varied the intensity of the lattice laser. This had the effect of changing the particle’s oscillation frequency, which in turn changed the uncertainty in its momentum. To measure this change (and prove they had demonstrated quantum squeezing), the researchers then released the nanoparticle from the trap and let it propagate for a short time before measuring its velocity. By repeating these time-of-flight measurements many times, they were able to obtain the particle’s velocity distribution.

    The telltale sign of quantum squeezing, the physicists say, is that the velocity distribution they measured for the nanoparticle was narrower than the uncertainty in velocity for the nanoparticle at its lowest energy level. Indeed, the measured velocity variance was narrower than that of the ground state by 4.9 dB, which they say is comparable to the largest mechanical quantum squeezing obtained thus far.
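
    To put that figure in context, the decibel value converts to a variance ratio in the usual way (a standard conversion applied to the quoted 4.9 dB, not an additional result from the paper):

    ```latex
    \Delta x\,\Delta p \ge \frac{\hbar}{2},
    \qquad
    \frac{\sigma^2_{\mathrm{squeezed}}}{\sigma^2_{\mathrm{ground}}} = 10^{-4.9/10} \approx 0.32
    ```

    In other words, the measured velocity variance was roughly a third of its zero-point value, with the uncertainty in position correspondingly enlarged so that Heisenberg’s bound is still respected.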

    “Our system will enable us to realize further exotic quantum states of motions and to elucidate how quantum mechanics should behave at macroscopic scales and become classical,” Aikawa tells Physics World. “This could allow us to develop new kinds of quantum devices in the future.”

    The post Physicists apply quantum squeezing to a nanoparticle for the first time appeared first on Physics World.

    https://physicsworld.com/a/physicists-apply-quantum-squeezing-to-a-nanoparticle-for-the-first-time/
    Isabelle Dumé

    Theoretical physicist Michael Berry wins 2025 Isaac Newton Medal and Prize

    Berry recognized for his contributions in mathematics and theoretical physics over a 60-year career

    The post Theoretical physicist Michael Berry wins 2025 Isaac Newton Medal and Prize appeared first on Physics World.

    Michael Berry
    Quantum pioneer: Michael Berry is best known for his work in the 1980s on the Berry Phase. (Courtesy: Michael Berry)

    The theoretical physicist Michael Berry from the University of Bristol has won the 2025 Isaac Newton Medal and Prize for his “profound contributions across mathematical and theoretical physics in a career spanning over 60 years”. Presented by the Institute of Physics (IOP), which publishes Physics World, the international award is given annually for “world-leading contributions to physics by an individual of any nationality”.

    Born in 1941 in Surrey, UK, Berry earned a BSc in physics from the University of Exeter in 1962 and a PhD from the University of St Andrews in 1965. He then moved to Bristol, where he has remained for the rest of his career.

    Berry is best known for his work in the 1980s in which he showed that, under certain conditions, quantum systems can acquire what is known as a geometric phase. He was studying quantum systems in which the Hamiltonian describing the system is slowly changed so that it eventually returns to its initial form.

    Berry showed that the adiabatic theorem widely used to describe such systems was incomplete and that a system acquires a phase factor that depends on the path followed, but not on the rate at which the Hamiltonian is changed. This geometric phase factor is now known as the Berry phase.
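
    In the standard modern notation – a textbook expression rather than a quotation from Berry’s original paper – the geometric phase acquired by the n-th eigenstate as the parameters R of the Hamiltonian are carried around a closed loop C is:

    ```latex
    \gamma_n(C) = i \oint_C \langle n(\mathbf{R}) \,|\, \nabla_{\mathbf{R}}\, n(\mathbf{R}) \rangle \cdot \mathrm{d}\mathbf{R}
    ```

    The result depends only on the geometry of the loop in parameter space, not on how slowly it is traversed – the hallmark of a geometric phase.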

    Over his career, Berry has written some 500 papers on a wide range of topics. In physics, Berry’s ideas have applications in condensed matter, quantum information and high-energy physics, as well as optics, nonlinear dynamics, and atomic and molecular physics. In mathematics, meanwhile, his work forms the basis for research in analysis, geometry and number theory.

    Berry told Physics World that the award is “unexpected recognition for six decades of obsessive scribbling…creating physics by seeking ‘claritons’ – elementary particles of sudden understanding – and evading ‘anticlaritons’ that annihilate them” as well as “getting insights into nature’s physics” such as studying tidal bores, tsunamis, rainbows and “polarised light in the blue sky”.

    Over the years, Berry has won many other honours, including the IOP’s Dirac Medal and the Royal Medal from the Royal Society, both awarded in 1990. He was also given the Wolf Prize for Physics in 1998 and the 2014 Lorentz Medal from the Royal Netherlands Academy of Arts and Sciences. In 1996 he received a knighthood for his services to science.

    Berry will also be a speaker at the IOP’s International Year of Quantum celebrations on 4 November.

    Celebrating success

    Berry’s latest honour forms part of the IOP’s wider 2025 awards, which recognize everyone from early-career scientists and teachers to technicians and subject specialists. Other winners include Julia Yeomans, who receives the Dirac Medal and Prize for her work highlighting the relevance of active physics to living matter.

    Lok Yiu Wu, meanwhile, receives the Jocelyn Bell Burnell Medal and Prize for her work on the development of a novel magnetic radical filter device, and for ongoing support of women and underrepresented groups in physics.

    In a statement, IOP president Michele Dougherty congratulated all the winners. “It is becoming more obvious that the opportunities generated by a career in physics are many and varied – and the potential our science has to transform our society and economy in the modern world is huge,” says Dougherty. “I hope our winners appreciate they are playing an important role in this community, and know how proud we are to celebrate their successes.”

    The full list of 2025 award winners is available here.

    The post Theoretical physicist Michael Berry wins 2025 Isaac Newton Medal and Prize appeared first on Physics World.

    https://physicsworld.com/a/theoretical-physicist-michael-berry-wins-2025-isaac-newton-medal-and-prize/
    Michael Banks

    Phase shift in optical cavities could detect low-frequency gravitational waves

    Global network could pinpoint astronomical sources

    The post Phase shift in optical cavities could detect low-frequency gravitational waves appeared first on Physics World.

    A network of optical cavities could be used to detect gravitational waves (GWs) in an unexplored range of frequencies, according to researchers in the UK. Using technology already within reach, the team believes that astronomers could soon be searching for ripples in space–time across the milli-Hz frequency band, spanning roughly 10⁻⁵ Hz to 1 Hz.

    GWs were first observed a decade ago and since then the LIGO–Virgo–KAGRA detectors have spotted GWs from hundreds of merging black holes and neutron stars. These detectors work in the 10 Hz–30 kHz range. Researchers have also had some success at observing a GW background at nanohertz frequencies using pulsar timing arrays.

    However, GWs have yet to be detected in the milli-Hz band, which should include signals from binary systems of white dwarfs, neutron stars, and stellar-mass black holes. Many of these signals would emanate from the Milky Way.

    Several projects are now in the works to explore these frequencies, including the space-based interferometers LISA, Taiji, and TianQin; as well as satellite-borne networks of ultra-precise optical clocks. However, these projects are still some years away.

    Multidisciplinary effort

    Joining these efforts was a collaboration called QSNET, part of the UK’s Quantum Technology for Fundamental Physics (QTFP) programme. “The QSNET project was a network of clocks for measuring the stability of fundamental constants,” explains Giovanni Barontini at the University of Birmingham. “This programme brought together physics communities that normally don’t interact, such as quantum physicists, technologists, high energy physicists, and astrophysicists.”

    QTFP ended this year, but not before Barontini and colleagues had made important strides in demonstrating how milli-Hz GWs could be detected using optical cavities.

    Inside an ultrastable optical cavity, light at specific resonant frequencies bounces constantly between a pair of opposing mirrors. When this resonant light is produced by a specific atomic transition, the frequency of the light in the cavity is very precise and can act as the ticking of an extremely stable clock.
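
    The underlying relation is simple cavity physics: for an idealized two-mirror (Fabry–Pérot) cavity of length L, the resonant frequencies are set entirely by the mirror separation (a textbook expression, not a formula taken from the QSNET work):

    ```latex
    \nu_m = \frac{m c}{2L}, \qquad m = 1, 2, 3, \ldots
    ```

    Because the mirror spacing can be made extraordinarily stable, the cavity provides a steady frequency reference against which tiny phase shifts can be compared.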

    “Ultrastable cavities are a main component of modern optical atomic clocks,” Barontini explains. “We demonstrated that they have reached sufficient sensitivities to be used as ‘mini-LIGOs’ and detect gravitational waves.”

    When such a GW passes through an optical cavity, the spacing between its mirrors does not change in any detectable way. However, the QSNET results have led Barontini’s team to conclude that milli-Hz GWs alter the phase of the light inside the cavity. What is more, this effect should be detectable in the most precise optical cavities currently available.

    “Methods from precision measurement with cold atoms can be transferred to gravitational-wave detection,” explains team member Vera Guarrera. “By combining these toolsets, compact optical resonators emerge as credible probes in the milli-Hz band, complementing existing approaches.”

    Ground-based network

    Their compact detector would comprise two optical cavities at 90° to each other – each operating at a different frequency – and an atomic reference at a third frequency. The phase shift caused by a passing gravitational wave is revealed in a change in how the three frequencies interfere with each other. The team proposes linking multiple detectors to create a global, ground-based network. This, they say, could detect a GW and also locate the position of its source in the sky.

    By harnessing this existing technology, the researchers now hope that future studies could open up a new era of discovery of GWs in the milli-Hz range, far sooner than many projects currently in development.

    “This detector will allow us to test astrophysical models of binary systems in our galaxy, explore the mergers of massive black holes, and even search for stochastic backgrounds from the early universe,” says team member Xavier Calmet at the University of Sussex. “With this method, we have the tools to start probing these signals from the ground, opening the path for future space missions.”

    Barontini adds: “Hopefully this work will inspire the build-up of a global network of sensors that will scan the skies in a new frequency window that promises to be rich in sources – including many from our own galaxy.”

    The research is described in Classical and Quantum Gravity.

    The post Phase shift in optical cavities could detect low-frequency gravitational waves appeared first on Physics World.

    https://physicsworld.com/a/phase-shift-in-optical-cavities-could-detect-low-frequency-gravitational-waves/
    No Author

    The physics behind why cutting onions makes us cry

    It’s mostly to do with knife sharpness and cutting technique

    The post The physics behind why cutting onions makes us cry appeared first on Physics World.

    Researchers in the US have studied the physics of how cutting onions can produce a tear-jerking reaction.

    While it is known that a volatile chemical released from the onion – propanethial S-oxide – irritates nerves in the cornea to produce tears, how such chemical-laden droplets reach the eyes and whether they are influenced by the knife or cutting technique remain less clear.

    To investigate, Sunghwan Jung from Cornell University and colleagues built a guillotine-like apparatus and used high-speed video to observe the droplets released from onions as they were cut by steel blades.

    “No one had visualized or quantified this process,” Jung told Physics World. “That curiosity led us to explore the mechanics of droplet ejection during onion cutting using high-speed imaging and strain mapping.”

    They found that droplets, which can reach up to 60 cm high, were released in two stages – the first being a fast mist-like outburst that was followed by threads of liquid fragmenting into many droplets.

    The most energetic droplets were released during the initial contact between the blade and the onion’s skin.

    When they varied the sharpness of the blade and the speed of the cut, they discovered that blunter blades and faster cuts released a greater number of droplets.

    “That was even more surprising,” notes Jung. “Blunter blades and faster cuts – up to 40 m/s – produced significantly more droplets with higher kinetic energy.”

    Another surprise was that refrigerating the onions before cutting also produced more droplets – travelling at similar speeds – than cutting unchilled onions did.

    So if you want to reduce the chances of welling up when making dinner, sharpen your knives, cut slowly and perhaps don’t keep the bulbs in the fridge.
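    For the curious, here is a quick back-of-the-envelope check on those figures – our own estimate, not part of the study, and one that ignores air drag even though drag matters for the finest mist. It links the reported 60 cm rise to the droplet launch speed and compares it with the fastest cuts.

```python
import math

# Back-of-the-envelope estimate (ours, not from the paper): ignoring air drag,
# a droplet that rises h metres must leave the onion at v = sqrt(2 g h).
g = 9.81          # gravitational acceleration, m/s^2
h_max = 0.60      # reported maximum droplet height, m
v_blade = 40.0    # fastest cutting speed quoted above, m/s

v_launch = math.sqrt(2 * g * h_max)          # about 3.4 m/s
print(f"launch speed for a {h_max * 100:.0f} cm rise: {v_launch:.1f} m/s")
print(f"fraction of the blade speed: {v_launch / v_blade:.1%}")
```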

    The researchers say there are many more layers to the work and now plan to study how different onion varieties respond to cutting as well as how cutting could influence the spread of airborne pathogens such as salmonella.

    The post The physics behind why cutting onions makes us cry appeared first on Physics World.

    https://physicsworld.com/a/the-physics-behind-why-cutting-onions-makes-us-cry/
    Michael Banks

    Motion blur brings a counterintuitive advantage for high-resolution imaging

    New algorithm turns structured motion into sharper images

    The post Motion blur brings a counterintuitive advantage for high-resolution imaging appeared first on Physics World.

    Blur benefit: images on the left were taken by a camera that was moving during exposure; those on the right were produced by the researchers’ algorithm, which uses the information captured by the camera’s motion to increase their resolution. (Courtesy: Pedro Felzenszwalb/Brown University)

    Images captured by moving cameras are usually blurred, but researchers at Brown University in the US have found a way to sharpen them up using a new deconvolution algorithm. The technique could allow ordinary cameras to produce gigapixel-quality photos, with applications in biological imaging and archival/preservation work.

    “We were interested in the limits of computational photography,” says team co-leader Rashid Zia, “and we recognized that there should be a way to decode the higher-resolution information that motion encodes onto a camera image.”

    Conventional techniques to reconstruct high-resolution images from low-resolution ones involve relating the low-res image to the high-res one via a mathematical model of the imaging process. The effectiveness of these techniques is limited, however, as they produce only relatively small increases in resolution. If the initial image is blurred by camera motion, this further limits the maximum resolution that can be recovered.

    Exploiting the “tracks” left by small points of light

    Together with Pedro Felzenszwalb of Brown’s computer science department, Zia and colleagues overcame these problems, successfully reconstructing a high-resolution image from one or several low-resolution images produced by a moving camera. The algorithm they developed to do this takes the “tracks” left by light sources as the camera moves and uses them to pinpoint precisely where the fine details must have been located. It then reconstructs these details on a finer, sub-pixel grid.
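    The flavour of the approach can be seen in a toy one-dimensional example – our own sketch, not the authors’ algorithm, and one that only models known sub-pixel shifts between exposures rather than the blur accumulated within a single exposure. Because the camera motion is known, each coarse pixel value is a known weighted average of the scene on a finer grid, and stacking several exposures gives a linear system that an ordinary least-squares solve can invert.

```python
import numpy as np

# Toy 1D sketch (ours, not the authors' algorithm): recover a fine-grained scene
# from several coarse images taken at known sub-pixel camera offsets.
rng = np.random.default_rng(0)

n_fine, pitch, aperture = 64, 4, 3   # fine grid size, coarse-pixel pitch, sensitive width
n_coarse = n_fine // pitch
x_true = np.zeros(n_fine)
x_true[[9, 10, 30, 45, 46, 47]] = [1.0, 0.5, 1.0, 0.3, 0.9, 0.3]   # sparse "scene"

def exposure_matrix(offset):
    """Coarse pixel i averages `aperture` fine pixels starting at i*pitch + offset,
    where `offset` is the camera's known displacement in fine-pixel units."""
    A = np.zeros((n_coarse, n_fine))
    for i in range(n_coarse):
        for j in range(aperture):
            A[i, (i * pitch + offset + j) % n_fine] = 1.0 / aperture
    return A

offsets = [0, 1, 2, 3]                                    # known camera motion between shots
A = np.vstack([exposure_matrix(s) for s in offsets])      # stacked forward model
y = A @ x_true + 1e-4 * rng.standard_normal(A.shape[0])   # four noisy coarse images

x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)             # sub-pixel reconstruction
print(f"max reconstruction error: {np.max(np.abs(x_hat - x_true)):.4f}")
```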

    “There was some prior theoretical work that suggested this shouldn’t be possible,” says Felzenszwalb. “But we show that there were a few assumptions in those earlier theories that turned out not to be true. And so this is a proof of concept that we really can recover more information by using motion.”

    Application scenarios

    When they tried the algorithm out, they found that it could indeed exploit the camera motion to produce images with much higher resolution than those taken without the motion. In one experiment, they used a standard camera to capture a series of images in a grid of high-resolution (sub-pixel) locations. In another, they took one or more images while the sensor was moving. They also simulated recording single images or sequences of pictures while vibrating the sensor and while moving it along a linear path. These scenarios, they note, could be applicable to aerial or satellite imaging. In each case, they used their algorithm to construct a single high-resolution image from the shots captured by the camera.

    “Our results are especially interesting for applications where one wants high resolution over a relatively large field of view,” Zia says. “This is important at many scales from microscopy to satellite imaging. Other areas that could benefit are super-resolution archival photography of artworks or artifacts and photography from moving aircraft.”

    The researchers say they are now looking into the mathematical limits of this approach as well as practical demonstrations. “In particular, we hope to soon share results from consumer camera and mobile phone experiments as well as lab-specific setups using scientific-grade CCDs and thermal focal plane arrays,” Zia tells Physics World.

    “While there are existing systems that cameras use to take motion blur out of photos, no one has tried to use that to actually increase resolution,” says Felzenszwalb. “We’ve shown that’s something you could definitely do.”

    The researchers presented their study at the International Conference on Computational Photography and their work is also available on the arXiv pre-print server.

    The post Motion blur brings a counterintuitive advantage for high-resolution imaging appeared first on Physics World.

    https://physicsworld.com/a/motion-blur-brings-a-counterintuitive-advantage-for-high-resolution-imaging/
    Isabelle Dumé

    Hints of a boundary between phases of nuclear matter found at RHIC

    STAR collaboration homes in on critical point for quark–gluon plasma

    The post Hints of a boundary between phases of nuclear matter found at RHIC appeared first on Physics World.

    In a major advance for nuclear physics, scientists on the STAR detector at the Relativistic Heavy Ion Collider (RHIC) in the US have spotted subtle but striking fluctuations in the number of protons emerging from high-energy gold–gold collisions. The observation might be the most compelling sign yet of the long-sought “critical point” marking a boundary separating different phases of nuclear matter. This is similar to how water can exist in liquid or vapour phases depending on temperature and pressure.

    Team member Frank Geurts at Rice University in the US tells Physics World that these findings could confirm that the “generic physics properties of phase diagrams that we know for many chemical substances apply to our most fundamental understanding of nuclear matter, too.”

    A phase diagram maps how a substance transforms between solid, liquid, and gas. For everyday materials like water, the diagram is familiar, but the behaviour of nuclear matter under extreme heat and pressure remains a mystery.

    Atomic nuclei are made of protons and neutrons tightly bound together. These protons and neutrons are themselves made of quarks that are held together by gluons. When nuclei are smashed together at high energies, the protons and neutrons “melt” into a fluid of quarks and gluons called a quark–gluon plasma. This exotic high-temperature state is thought to have filled the universe just microseconds after the Big Bang.

    Smashing gold ions

    The quark–gluon plasma is studied by accelerating heavy ions like gold nuclei to nearly the speed of light and smashing them together. “The advantage of using heavy-ion collisions in colliders such as RHIC is that we can repeat the experiment many millions, if not billions, of times,” Geurts explains.

    By adjusting the collision energy, researchers can control the temperature and density of the fleeting quark–gluon plasma they create. This allows physicists to explore the transition between ordinary nuclear matter and the quark–gluon plasma. Within this transition, theory predicts the existence of a critical point where gradual change becomes abrupt.

    Now, the STAR Collaboration has focused on measuring the minute fluctuations in the number of protons produced in each collision. These “proton cumulants,” says Geurts, are statistical quantities that “help quantify the shape of a distribution – here, the distribution of the number of protons that we measure”.

    In simple terms, the first two cumulants correspond to the average and width of that distribution, while higher-order cumulants describe its asymmetry and sharpness. Ratios of these cumulants are tied to fundamental properties known as susceptibilities, which become highly sensitive near a critical point.
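    For readers who want to see the arithmetic, the sketch below – our own, using synthetic Poisson-distributed counts rather than STAR data – computes the first four cumulants of an event-by-event proton-number distribution and the ratios that are compared with theory.

```python
import numpy as np

# Minimal sketch (synthetic data, not STAR's analysis): from event-by-event
# proton counts to the first four cumulants and the ratios used in the search.
rng = np.random.default_rng(1)
counts = rng.poisson(lam=20.0, size=1_000_000)   # stand-in "events"; for a pure
                                                 # Poisson source C1 = C2 = C3 = C4

d = counts - counts.mean()
C1 = counts.mean()              # mean of the distribution
C2 = np.mean(d**2)              # variance (width)
C3 = np.mean(d**3)              # third cumulant (asymmetry / skewness)
C4 = np.mean(d**4) - 3 * C2**2  # fourth cumulant (sharpness of the tails / kurtosis)

# Cumulant ratios cancel trivial volume factors and map onto ratios of
# susceptibilities; for the Poisson stand-in all three are close to 1, whereas a
# critical point is expected to produce non-monotonic deviations with energy.
print(f"C2/C1 = {C2 / C1:.3f}   C3/C2 = {C3 / C2:.3f}   C4/C2 = {C4 / C2:.3f}")
```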

    Unexpected discovery

    Over three years of experiments, the STAR team studied gold–gold collisions at a wide range of energies, using sophisticated detectors to track and identify the protons and antiprotons created in each event. By comparing how the number of these particles changed with energy, the researchers discovered something unexpected.

    As the collision energy decreased, the fluctuations in proton numbers did not follow a smooth trend. “STAR observed what it calls non-monotonic behaviour,” Geurts explains. “While at higher energies the ratios appear to be suppressed, STAR observes an enhancement at lower energies.” Such irregular changes, he says, are consistent with what might happen if the collisions pass near the critical point – the boundary separating different phases of nuclear matter.

    For Volodymyr Vovchenko, a physicist at the University of Houston who was not involved in the research, the new measurements represent “a major step forward”. He says that “the STAR Collaboration has delivered the most precise proton-fluctuation data to date across several collision energies”.

    Still, interpretation remains delicate. The corrections required to extract pure physical signals from the raw data are complex, and theoretical calculations lag behind in providing precise predictions for what should happen near the critical point.

    “The necessary experimental corrections are intricate,” Vovchenko says, and some theoretical models “do not yet implement these corrections in a fully consistent way.” That mismatch, he cautions, “can blur apples-to-apples comparisons.”

    The path forward

    The STAR team is now studying new data from lower-energy collisions, focusing on the range where the signal appears strongest. The results could reveal whether the observed pattern marks the presence of a nuclear matter critical point or stems from more conventional effects.

    Meanwhile, theorists are racing to catch up. “The ball now moves largely to theory’s court,” Vovchenko says. He emphasizes the need for “quantitative predictions across energies and cumulants of various order that are appropriate for apples-to-apples comparisons with these data.”

    Future experiments, including RHIC’s fixed-target program and new facilities such as the FAIR accelerator in Germany, will extend the search even further. By probing lower energies and producing vastly larger datasets, they aim to map the transition between ordinary nuclear matter and quark–gluon plasma with unprecedented precision.

    Whether or not the critical point is finally revealed, the new data are a milestone in the exploration of the strong force and the early universe. As Geurts puts it, these findings trace “landmark properties of the most fundamental phase diagram of nuclear matter,” bringing physicists one step closer to charting how everything – from protons to stars – first came to be.

    The research is described in Physical Review Letters.

    The post Hints of a boundary between phases of nuclear matter found at RHIC appeared first on Physics World.

    https://physicsworld.com/a/hints-of-a-boundary-between-phases-of-nuclear-matter-found-at-rhic/
    No Author