2030-2039
The High-Definition Space Telescope (HDST) is operational
The High-Definition Space Telescope (HDST) is a major new space observatory positioned at the Sun–Earth Lagrange point 2, orbiting the Sun about a million miles from Earth. It was proposed in 2015 by the Association of Universities for Research in Astronomy (AURA), the organisation that runs Hubble and other telescopes on behalf of NASA. Reviewed by the National Academy of Sciences in 2020 and subsequently approved by Congress, the HDST is deployed and operational during the 2030s.* With a mirror diameter of 11.7 metres, it is much larger than both Hubble (2.4 m) and the James Webb Space Telescope (6.5 m).
The HDST is designed to locate dozens of Earthlike planets in our local stellar neighbourhood. It is equipped with an internal coronagraph – a disk that blocks light from the central star, making a dim planet more visible. A starshade is eventually added that can float miles out in front of it to perform the same function. Exoplanets are imaged in direct visible light, as well as being spectroscopically analysed to determine their atmospheres and confirm the presence of water, oxygen, methane, and other organic compounds.
Tens of thousands of exoplanets have been catalogued since Kepler and other missions of the previous decades. With attention now focused on the most promising candidates for biosignatures, the possibility of detecting the first signs of alien life is greatly increased during this time.
The HDST is 100 times more sensitive than Hubble. Peering into the deep universe, it can resolve objects only 300 light years in diameter, located at distances of 10 billion light years – the nucleus of a small galaxy, for example, or a gas cloud on the way to forming a new star system.* It can study extremely faint objects, up to 20 times dimmer than anything that can be seen from large, ground-based telescopes.
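As a rough illustration of why an 11.7 m aperture is in the right regime for such observations, the Rayleigh diffraction limit, θ ≈ 1.22 λ/D, can be compared with the angular size of the claimed targets. The Python sketch below assumes a 500 nm observing wavelength and a naive distance ratio (ignoring cosmological corrections), so it is a back-of-envelope check rather than a rigorous calculation:

```python
import math

# Back-of-envelope check of the HDST resolution claim.
# Assumptions (not from the source): 500 nm wavelength, naive distance ratio.
WAVELENGTH_M = 500e-9   # visible light
APERTURE_M = 11.7       # HDST primary mirror diameter

# Rayleigh criterion: smallest resolvable angle, in radians.
theta_rad = 1.22 * WAVELENGTH_M / APERTURE_M

# Angular size of a 300 light-year object at 10 billion light-years.
object_angle_rad = 300 / 10e9

RAD_TO_MAS = math.degrees(1) * 3600 * 1000  # radians -> milliarcseconds
print(f"Diffraction limit:      {theta_rad * RAD_TO_MAS:.1f} mas")        # ~10.8 mas
print(f"300 ly at 10 Gly spans: {object_angle_rad * RAD_TO_MAS:.1f} mas")  # ~6.2 mas
```

The two figures land within a factor of two of each other, consistent with the claim once shorter (blue and ultraviolet) wavelengths, where the diffraction limit tightens, are taken into account.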
The UV sensitivity of the HDST can be used to map the distribution of hot gases lying outside the perimeter of galaxies. This reveals the structure of the so-called “cosmic web” that galaxies are embedded inside, and shows how chemically enriched gases flow in and out of galaxies to fuel star formation. Individual stars like our Sun can be picked out from 30 million light years away.
Closer to home, the HDST is capable of imaging many features in our own Solar System with spectacular resolution and detail, such as the icy plumes from Europa and other moons, or weather conditions on the gas giants. It can search for remote, hidden members of our Solar System in the Kuiper Belt and beyond. The total cost of the telescope is approximately $10 billion.
Image credit: D. Ceverino, C. Moody, and G. Snyder, and Z. Levay (STScI)
2030-2033
NASA’s Europa Clipper mission searches for life
The Europa Clipper is a NASA probe sent to study Europa, the smallest of the four Galilean moons orbiting Jupiter. As a Flagship-class mission, it is among the costliest and most capable science spacecraft to be launched in the agency’s history.*
The uncrewed spacecraft departs from Earth in October 2024 aboard a Falcon Heavy, during a 21-day launch window. It utilises gravity assists from Mars in February 2025 and Earth in December 2026, before arriving at Europa in April 2030.*
The probe is designed to observe Europa, determine its habitability and aid in the selection of a landing site for a future lander. The science goals are focused on the three main requirements for life: liquid water, chemistry, and energy. Specifically, the objectives are to study:
- Ice shell and ocean: Confirm the existence and nature of water, within or beneath the ice, and processes of surface-ice-ocean exchange
- Composition: Determine the chemistry and distribution of key compounds and the links to ocean composition
- Geology: Determine the characteristics and formation of surface features, including sites of recent or current activity
To achieve these goals, a large scientific payload of nine instruments is contributed by the Jet Propulsion Laboratory (JPL), along with various research institutes and universities. The electronic components are protected from Jupiter’s intense radiation by a 150 kg shield made of titanium and aluminium. The instruments include a topographical imager, ice-penetrating radar, thermal spectrometer, magnetometer and neutral mass spectrometer, alongside a high-gain antenna for communications. Extremely high-resolution photos are made possible by the main imaging system, which maps most of Europa at 50 m (160 ft) resolution and can also zoom into selected surface areas, revealing details as small as 0.5 metres (1.6 ft).
The probe conducts 45 flybys of Europa at distances ranging from 2,700 km (1,678 mi) to as close as 25 km (16 mi) during its 3.5-year mission. It can therefore reach altitudes low enough to pass through plumes of water vapour erupting from the moon’s ice crust, obtaining samples for analysis.
A key feature of the mission plan is that the Clipper uses gravity assists from Europa, Ganymede and Callisto to change its trajectory, allowing the spacecraft to return to a different close approach point with each flyby. Each flyby covers a different sector of Europa in order to produce a near-global (95%) topographic survey, including ice thickness.
The mission timeline overlaps with ESA’s Jupiter Icy Moons Explorer (JUICE), which studies the Jovian system from 2029 to 2034, performing flybys past Europa and Callisto before moving into orbit around Ganymede. The two missions complement each other – with shared data helping to improve the science surrounding the moon’s crust and subsurface ocean (the latter is believed to contain more water than all of Earth’s oceans combined), as well as guiding the development of future surface landers in the 2030s and 2040s.
Credit: NASA/JPL-Caltech
2030
Global population is reaching crisis point
The environmental impacts of population growth and industrial expansion are becoming alarmingly obvious by 2030, with increasingly worrying signs of impending food, water and other resource crises. During the early 2000s, the world had a population of six billion. By 2030, another two billion have been born, most of them in poorer countries. Humanity’s footprint is such that it now requires the equivalent of two whole Earths to sustain itself in the long term.* The extra one-third of human beings on the planet is placing enormous pressure on natural habitats, which continue to be degraded far faster than they can be replenished.
With carbon dioxide levels now approaching the grim milestone of 450 parts per million (ppm), climate feedback loops are emerging with greater frequency, particularly in the Arctic, where melting permafrost is now venting almost a gigatonne of carbon annually.** In some regions, crop yields are falling by up to one third* and the prices of some crops are more than doubling,* with devastating impacts on the world’s poor. This is threatening to undermine fragile social, economic and security conditions in parts of the Middle East, Asia and Africa.
The urban population, which stood at 3.5 billion in 2010, has now risen to almost 5 billion. Resource scarcity, economic and political factors, and mounting environmental issues are forcing people into ever more crowded and high-density places. Some cities are merging to form sprawling metropolitan areas with tens of millions of people. In some nations, those living in urban regions make up over 90% of the population.*
By 2030, urban areas occupy an additional 463,000 sq mi (741,000 sq km) globally, relative to 2012. This is equivalent to more than 20,000 new football fields being added to the global urban area every day for the first three decades of the 21st century. Almost $30 trillion has been spent during the last two decades on transportation, utilities and other infrastructure. Some of the most substantial growth has been in China, which boasts an urban population approaching a billion and has spent $100 billion annually just on its own projects. Much of the Chinese coastline has been transformed into what is essentially a giant urban corridor. Turkey is another country that has witnessed phenomenal urban development.
Global forecasts of urban expansion to 2030. Credit: Boston University’s Department of Geography and Environment
All of this expansion is having a major impact on the surrounding environment. In addition to cities, major new networks of road and rail are being built, crisscrossing landscapes and cutting through wildlife zones, such as national parks* and forests. The Amazon continues to be opened up for resource exploitation and food production. Numerous species are reclassified as endangered during this period as a result of human encroachment, pollution and habitat destruction.
Despite the ongoing degradation of the natural world, there are encouraging signs in certain areas of industry – such as the now rapid migration from fossil fuels to renewable energy. Advances in nanotechnology have resulted in greatly improved solar power efficiency. In some countries, photovoltaic materials are being added to almost every new building.* Plastic pollution is also being tackled, with single-use plastics being banned, new biodegradable plastic types seeing increasingly widespread use,* and improved recycling rates.
Nevertheless, the world faces a crisis unparalleled in history, with tipping points fast approaching. Only a few more decades remain for humanity to make its transition to a more sustainable economic paradigm.
China’s Long March 9 rocket begins lunar missions
The Long March 9 (officially CZ-9) is a new Chinese rocket, first announced in 2018 and intended for long range missions to the Moon, Mars and beyond. With a payload capacity of 140,000 kg to low Earth orbit (LEO) and 50,000 kg to trans-lunar injection, it ranks among the largest rockets ever built – one of the very few appearing in the “super heavy-lift” launch vehicle class.
The Long March 9 is a three-stage rocket with a large core, 10 metres in diameter, surrounded by a cluster of four engines. Comparable in size to NASA’s retired Saturn V, this huge rocket is specifically designed to expand China’s capabilities beyond Earth and deeper into space. Sitting atop the rocket is a next-generation, lunar-capable spacecraft with capacity for up to six astronauts.*
The Long March 9 completed feasibility studies in 2021 and received government approval that same year. The 14th Five Year Plan (2021–25) enabled it to proceed to the next stage of development. By 2030, a maiden test flight has occurred, and the launch vehicle is being prepared for use in lunar missions.* Following additional test flights, China lands its first astronauts on the Moon in the early part of this decade.*
This is taking place alongside similar efforts by the United States, which now has its own lunar-capable rocket – NASA’s Space Launch System – as well as commercial ventures such as SpaceX and Blue Origin. The two nations, having been engaged in a second space race for the last two decades, are now finally seeing the fruits of their long-term research and development.
The Long March 9 forms a pivotal part of China’s operations on the Moon, not only for sending astronauts on short duration missions but also for establishing a more permanent presence. Its huge cargo-carrying capacity allows a scientific outpost to form on the lunar surface during the late 2030s.
Credit: CCTV
The 6G standard is released
By 2030, a new cellular network standard has emerged that offers even greater speeds than 5G. Early research on this sixth generation (6G) had started during the late 2010s when China,* the USA* and other countries investigated the potential for working at higher frequencies.
Whereas the first four mobile generations tended to operate at between several hundred and several thousand megahertz, 5G expanded this range into the tens of thousands of megahertz – in other words, tens of gigahertz. A revolutionary technology at the time, it allowed vastly improved bandwidth and lower latency. However, it was not without its problems: exponentially growing demand for wireless data transfer put ever-increasing pressure on service providers, while certain specialist and emerging applications required even shorter latencies.*
This led to development of 6G, based on frequencies ranging from 100 GHz to 1 THz and beyond. A ten-fold boost in data transfer rates would mean users enjoying terabits per second (Tbit/s). Furthermore, improved network stability and latency – achieved with AI and machine learning algorithms – could be combined with even greater geographical coverage. The Internet of Things, already well-established during the 2020s, now had the potential to grow by further orders of magnitude and connect not billions, but trillions of objects.
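One reason such high frequencies are challenging: free-space path loss grows with the square of frequency. A minimal sketch of the standard Friis path-loss formula, with illustrative carrier frequencies (the specific values are assumptions, not from the original text):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss: 20*log10(4*pi*d*f/c), in decibels."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Loss over a 100 m link at representative carrier frequencies.
for label, freq in [("4G (2 GHz)", 2e9), ("5G mmWave (28 GHz)", 28e9),
                    ("6G (300 GHz)", 300e9), ("6G (1 THz)", 1e12)]:
    print(f"{label:>18}: {fspl_db(100, freq):5.1f} dB")
```

The extra ~54 dB of loss between 2 GHz and 1 THz over the same distance is part of why 6G depends so heavily on dense small cells, beamforming, and AI-assisted network management.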
Following a decade of research and testing, widespread adoption of 6G occurs in the 2030s. However, wireless telecommunications are now reaching a plateau in terms of progress, as it becomes extremely difficult to extend beyond the terahertz range.* These limits are eventually overcome, but require wholly new approaches and fundamental breakthroughs in physics. The idea of a seventh standard (7G) is also placed in doubt by several emerging technologies that support the existing wireless communications, making future advances iterative, rather than generational.*
Desalination has exploded in use
A combination of increasingly severe droughts, aging infrastructure and the depletion of underground aquifers is now endangering millions of people around the world. The ongoing population growth described earlier is only exacerbating this, with global freshwater supplies continually stretched to their limits. This is forcing a rapid expansion of desalination technology.
The idea of removing salt from saline water had been described as early as 320 BC.* In the late 1700s it was used by the U.S. Navy, with solar stills built into shipboard stoves. It was not until the 20th century, however, that industrial-scale desalination began to emerge, with multi-stage flash distillation and reverse osmosis membranes. Waste heat from fossil fuel or nuclear power plants could be used, but even then, these processes remained prohibitively expensive, inefficient and highly energy-intensive.
By the early 21st century, the world’s demand for resources was growing exponentially. The UN estimated that humanity would require over 30 percent more water between 2012 and 2030.* Historical improvements in freshwater production efficiency were no longer able to keep pace with a ballooning population,* made worse by the effects of climate change.
New methods of desalination were seen as a possible solution to this crisis and a number of breakthroughs emerged during the 2000s and 2010s. One such technique – of particular benefit to arid regions – was the use of concentrated photovoltaic (CPV) cells to create hybrid electricity/water production. In the past, these systems had been hampered by excessive temperatures which made the cells inefficient. This issue was overcome by the development of water-filled micro-channels, capable of cooling the cells. In addition to making the cells themselves more efficient, the heated waste water could then be reused in desalination. This combined process could reduce cost and energy use, improving its practicality on a larger scale.*
Breakthroughs like this and others, driven by huge levels of investment, led to a substantial increase in desalination around the world. This trend was especially notable in the Middle East and other equatorial regions, home to both the highest concentration of solar energy and the fastest-growing demand for water.
However, this exponential progress was dwarfed by the sheer volume of water required by an ever-expanding global economy, which now included the burgeoning middle classes of China and India. The world was adding an extra 80 million people each year – equivalent to the entire population of Germany.* By 2017, Yemen was in a state of emergency, with its capital almost entirely depleted of groundwater.* Significant regional instability began to affect the Middle East, North Africa and South Asia, as water resources became weapons of war.*
Amid this turmoil, even greater advances were being made in desalination. It was acknowledged that present trends in capacity – though impressive compared to earlier decades – were insufficient to satisfy global demand and therefore a major, fundamental breakthrough would be needed on a large scale.*
Nanotechnology offered just such a breakthrough. The use of graphene in the water filtration process had been demonstrated in the early 2010s.** This involved atom-thick sheets of carbon, able to separate salt from water using much lower pressure and, hence, much lower energy. This was due to the extreme precision with which the perforations in each graphene membrane could be manufactured: at only a nanometre across, each hole was large enough for water molecules to pass through, while blocking the larger hydrated salt ions. An added benefit was the very high durability of graphene, potentially making desalination plants more reliable and longer-lasting.
Unfortunately, patents were secured by corporations that initially limited its wider use. A number of high-profile international lawsuits were brought, as entrepreneurs and companies attempted to develop their own versions. With a genuine crisis unfolding, this led to an eventual restructuring of intellectual property rights. By 2030, graphene-based filtration systems have closed most of the gap between supply and demand, easing the global water shortage.* However, the delayed introduction of this revolutionary technology has caused problems in many vulnerable parts of the world.
In the 2040s* and beyond, desalination will play an even more crucial role, as humanity adapts to a rapidly changing climate. Ultimately, it will become the world’s primary source of freshwater, as non-renewable sources like fossil aquifers are depleted around the globe.
Smart grid technology is widespread in developed nations
In prior decades, the disruptive effects of energy shocks,* alongside the ever-increasing demands of growing and industrialising populations, were putting strain on the world’s power grids. Blackouts occurred in the worst-hit regions, with consumers becoming more and more conscious of their energy use and taking measures to monitor and cut back their consumption. This already precarious situation was exacerbated by the relatively ancient infrastructure in many countries. Much of the grid at the beginning of the 21st century was extremely old and inefficient, losing more than half of its available electricity during production, transmission and usage. A convergence of business, political, social and environmental issues forced governments and regulators to finally address this problem.
By 2030, integrated smart grids are becoming widespread in the developed world,** the main benefit of which is the optimal balancing of demand and production. Traditional power grids had previously relied on a just-in-time delivery system, where supply was manually adjusted constantly in order to match demand. Now, this problem is being eliminated due to a vast array of sensors and automated monitoring devices embedded throughout the grid. This approach had already emerged on a small scale, in the form of smart meters for individual homes and offices. By 2030, it is being scaled up to entire national grids.
Power plants now maintain constant, real-time communication with all residents and businesses. If capacity is ever strained, appliances instantly self-adjust to consume less power, even turning themselves off completely when idle and not in use. Since balancing demand and production is now achieved on a real-time, automatic basis within the grid itself, this greatly reduces the need for “peaker” plants as supplemental sources. In the event of any remaining gap, algorithms calculate the exact requirements and turn on extra generators automatically.
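The balancing logic described above can be pictured as a simple control loop. The sketch below is a toy illustration only – all names, thresholds and proportions are hypothetical, not a real grid-control API:

```python
import math

def balance_grid(supply_mw: float, demand_mw: float,
                 curtailable_share: float = 0.05,
                 reserve_step_mw: float = 50.0) -> dict:
    """Decide corrective actions for one real-time control interval."""
    gap = demand_mw - supply_mw
    actions = {"curtail_appliances": False, "extra_generators": 0}
    if gap > 0:
        # Step 1: ask smart appliances to shed non-essential load.
        actions["curtail_appliances"] = True
        gap -= curtailable_share * demand_mw
    if gap > 0:
        # Step 2: dispatch reserve generators for any remaining shortfall.
        actions["extra_generators"] = math.ceil(gap / reserve_step_mw)
    return actions

print(balance_grid(supply_mw=9500, demand_mw=10200))
# -> {'curtail_appliances': True, 'extra_generators': 4}
```

The key design point is the ordering: demand-side curtailment is effectively free and instantaneous, so it is exhausted first, and only the residual gap triggers supplemental generation.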
Computers also help adjust for and level out peaks and troughs in energy demand. Sensors in the grid can detect precisely when and where consumption is highest. Over time, production can be automatically shifted according to the predicted rise and fall in demand. Smart meters can then adjust for any discrepancies. Another benefit of this approach is allowing energy providers to raise electricity prices during periods of high consumption, helping to flatten out peaks. This makes the grid more reliable overall, since it reduces the number of variables that need to be accounted for.
Yet another advantage of the smart grid is its capacity for bidirectional flow. In the past, power transmission could only be done in one direction. Today, a proliferation of local power generation, such as photovoltaic panels and fuel cells, means that energy production is much more decentralised. Smart grids now take into account homes and businesses which can add their own surplus electricity to the system, allowing energy to be transmitted in both directions through power lines.
This trend of redistribution and localisation is also making large-scale renewables more viable, since the grid is now adaptable to the intermittent power output of solar and wind. On top of this, smart grids are designed with multiple full load-bearing transmission routes. This way, if a broken transmission line causes a blackout, sensors instantly locate the fault while electricity is rerouted around it to the affected customers. Crews no longer need to investigate multiple transformers to isolate a problem, and blackouts are reduced as a result. This also prevents a localised failure from cascading into a rolling blackout.
Overall, this new “internet of energy” is far more sustainable, efficient and reliable. Energy costs are reduced, while paving the way to a post-carbon economy. Countries that quickly adopt smart grids are better protected from oil shocks, while greenhouse gas emissions are reduced by almost 20 per cent in some nations.* As the shift to clean energy continues, this situation will only improve, expanding to even larger scales. Regions begin merging their grids together on a country-to-country, and eventually continent-wide, basis.
Coal power is phased out by Germany
During the 20th century, Germany obtained its electricity predominantly from fossil fuels (particularly coal) and then later also nuclear power. As Europe’s largest consumer of electricity, it had very high carbon emissions, ranking sixth in the world. At the dawn of the 21st century, however, a radical change began to occur as its supply shifted to new, less polluting forms of energy.
In 2010, the German government published the Energiewende (“energy transition”), a key policy document outlining targets for increasing the share of renewables in power consumption, alongside greenhouse gas (GHG) emission reductions of 80–95% by 2050 (relative to 1990).
Following the Japanese Fukushima disaster of 2011, the government abandoned the use of nuclear power as a bridging technology, deciding instead to phase out nuclear altogether by 2022. This move triggered a brief rise in coal use, to make up the shortfall. However, renewables were expanding rapidly, with solar and wind forming an ever-larger share of electric generating capacity. In 2019, a government-appointed coal commission proposed a pathway to phase out all coal power within two decades.
By the end of the 2010s, Germany had 40 GW of installed coal power capacity, with 21 GW fired by bituminous coal – referred to as “hard coal” by Germany’s Federal Network Agency – and 19 GW by lignite, or “brown coal”. A bituminous coal plant, Datteln 4, entered service in mid-2020, adding 1.1 GW and becoming Germany’s last ever coal plant to be newly connected to the grid. The government planned to take all 84 sites offline by 2038.
In September 2021, Germany held federal elections. With Angela Merkel stepping down and the ruling Union parties (CDU/CSU) recording their worst ever result, Olaf Scholz of the Social Democratic Party (SPD) formed a three-way coalition alongside the Free Democratic Party (FDP) and the Greens. As part of this deal, Germany’s previous commitment to phase out coal power would be brought forward by eight years, from 2038 to 2030.
During the first half of the 2020s, many plants voluntarily went offline in the north, west and south of the country. As renewables continued to surge in capacity, forced closures occurred in the latter part of the decade. The phase-out had commenced in western Germany, to soften the impact on the economically poorer eastern side of the country.
By 2030, the final plant shutdown has occurred. More than 80% of Germany’s electricity is now generated by renewables. As part of this transition, it is now compulsory for solar energy to be included on the roofs of all new commercial buildings, while each of the country’s 16 states must provide at least 2% of their land area for wind power. Around 15 million of Germany’s cars are now electric, as the European Union nears its target of phasing out new cars with internal combustion engines by 2035. Germany continues to make progress in reducing greenhouse gas emissions, on its way to net zero by 2045.*
An interstellar message arrives at Luyten’s Star
Luyten’s Star (GJ 273) is a red dwarf located about 12.4 light-years from Earth. Despite its relatively close proximity, it has a visual magnitude of only 9.9, making it too faint to be seen with the naked eye. It was named after Willem Luyten, who, in collaboration with Edwin G. Ebbighausen, first determined its high proper motion in 1935. Luyten’s Star is one-quarter the mass of the Sun and has 35% of its radius.
In March 2017, two planets were discovered orbiting Luyten’s Star. The outer planet, GJ 273b, was a “Super Earth” with 2.9 Earth masses and found to be lying in the habitable zone, with potential for liquid water on the surface. The inner planet, GJ 273c, had 1.2 Earth masses, but orbited much closer, with an orbital period of only 4.7 days.
In October 2017, a project known as “Sónar Calling GJ 273b” was initiated. This would send music through deep space in the direction of Luyten’s Star in an attempt to communicate with extraterrestrial intelligence. The project – organised by Messaging Extraterrestrial Intelligence (METI) and Sónar (a music festival in Barcelona, Spain) – beamed a series of radio signals from a radar antenna at Ramfjordmoen, Norway. The first transmissions were sent on 16th, 17th and 18th October, with a second batch in April 2018.
This became the first radio message ever sent to a potentially habitable exoplanet. The message included 33 music pieces of 10 seconds each, by artists including Autechre, Jean Michel Jarre, Kate Tempest, Kode 9, Modeselektor and Richie Hawtin. Also included were scientific and mathematical tutorials sent in binary code, designed to be understandable by extraterrestrials; a recording of an unborn baby girl’s heartbeat; along with poetry and political statements about humans.
Due to the lag from light speed over a distance of 70 trillion miles, the earliest possible date for a response to arrive back would be 2042.*
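The arithmetic behind these dates is straightforward: a radio signal needs 12.4 years to travel each way, so even an immediate reply could not arrive before the early 2040s. A quick check in Python:

```python
# Light-speed round trip to Luyten's Star (12.4 light-years).
DISTANCE_LY = 12.4
SENT = 2017.8           # first transmissions, October 2017

arrival = SENT + DISTANCE_LY    # message reaches GJ 273
reply = arrival + DISTANCE_LY   # an immediate answer travels back

print(f"Message arrives: ~{int(arrival)}")  # ~2030
print(f"Earliest reply:  ~{int(reply)}")    # ~2042
```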
Credit: Sónar
Depression is the number one global disease burden
When measured by years of life lost, depression has now overtaken heart disease to become the leading global disease burden.* This includes both years lived in a state of poor health and years lost due to premature death. Principal causes of depression include debt worries, unemployment, crime, violence (especially family violence), war, environmental degradation and disasters. The ongoing economic stagnation around the world is a major contributing factor. However, progress is being made in destigmatising mental illness.*
Child mortality is approaching 2% globally
Childhood mortality, defined as the number of children dying under the age of five, was a major issue during the late 20th century. In 1970, more than 14% of children worldwide never saw their 5th birthday, while in Africa the figure was even higher at over 24%. The gap between rich and poor nations was staggering, with a mortality rate of only 24 per 1,000 live births in the most industrialised countries, an order of magnitude lower.*
Improvements in medicine, education, economic opportunity and living standards led to a fall in child deaths over subsequent decades. More and more children were being saved by low-tech, cost-effective, evidence-based measures. These included vaccines, antibiotics, micronutrient supplementation, insecticide-treated bed nets, improved family care and breastfeeding practices, and oral rehydration therapy. Policy interventions that reduced mortality and improved equity included the empowerment of women, the removal of social and financial barriers to basic services, innovations that made critical services more available to the poor, and increased local accountability.
The U.N.’s Millennium Development Goals included the ambitious target of reducing by two-thirds (between 1990 and 2015) the number of children dying under age five. While this goal failed to be met in time, the progress achieved was still significant – a drop from 92 to 43 deaths per 1,000 live births. Public, private and non-profit organisations, keen to build on their experience and ensure the continuation of this trend, made childhood survival a focus of the new sustainable development agenda for 2030. A new objective was set, which aimed to lower the under-five mortality figure to less than 25 per 1,000 live births worldwide.*
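The shortfall is visible in the numbers themselves. A quick check using the figures quoted above:

```python
# MDG 4 called for a two-thirds reduction in under-five mortality, 1990-2015.
RATE_1990 = 92   # deaths per 1,000 live births
RATE_2015 = 43

target = RATE_1990 * (1 - 2 / 3)
achieved = (RATE_1990 - RATE_2015) / RATE_1990

print(f"Target for 2015: {target:.0f} per 1,000")                       # ~31
print(f"Actual in 2015:  {RATE_2015} per 1,000 ({achieved:.0%} drop)")  # 53% drop
```

A 53% reduction against a 67% target: substantial progress, but short of the goal.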
With ongoing improvements in public health and education – aided by widespread access to the Internet in developing regions* – this new goal was largely met, with further declines in childhood mortality from 2015 to 2030. Although some regions in Africa still have unacceptably high rates, the overall worldwide figure is around 2% by 2030.*
One recent development now having a major impact is the mass application of gene drives to control mosquito populations, greatly reducing the number of malaria cases.* Huge advances have also been made in the prevention and treatment of HIV, which is no longer the death sentence it used to be. Some diseases have been eradicated by now, including polio, Guinea worm, elephantiasis, river blindness, and blinding trachoma.*
However, the progress achieved in recent decades is now threatened by the worsening problems of climate change and other environmental issues, along with antibiotic resistance.* Even discounting these emerging threats, it is simply impractical and impossible to prevent every childhood death with current levels of technology and surveillance. As such, childhood mortality begins to taper off – not reaching zero until much further into the future.
The Muslim population has increased significantly
By 2030, the Muslim share of the global population has reached 26.4%. This compares with 19.1% in 1990.* Countries which have seen the largest growth rates include Ireland (190.7%), Canada (183.1%), Finland (150%), Norway (149.3%), New Zealand (146.3%), the United States (139.5%) and Sweden (120.2%). Those which have experienced the biggest falls include Lithuania (-33.3%), Moldova (-13.3%), Belarus (-10.5%), Japan (-7.6%), Guyana (-7.3%), Poland (-5.0%) and Hungary (-4.0%).
A number of factors have driven this trend. Firstly, Muslims have higher fertility rates (more children per woman) than non-Muslims. Secondly, a larger share of the Muslim population has entered – or is entering – the prime reproductive years (ages 15-29). Thirdly, health and economic gains in Muslim-majority countries have resulted in greater-than-average declines in child and infant mortality rates, with life expectancy improving faster too.
Despite an increasing share of the population, the overall rate of growth for Muslims has begun to slow when compared with earlier decades. Later this century, both Muslim and non-Muslim numbers will approach a plateau as the global population stabilises.* The spread of democracy* and improved access to education* are emerging as major factors in the slowing fertility rates (though Islam has yet to undergo the sort of renaissance and reformation that Christianity went through).
Sunni Muslims continue to make up the overwhelming majority (90%) of Muslims in 2030. The portion of the world’s Muslims who are Shia has declined slightly, mainly because of relatively low fertility in Iran, where more than a third of the world’s Shia Muslims live.
Orbital space junk is becoming a major problem for space flight
Space junk – debris left in orbit from human activities – has been steadily building in low-Earth orbit for more than 70 years. It is made up of everything from spent rocket stages, to defunct satellites, to debris left over from accidental collisions. Pieces of space junk can reach several metres in size, but most are minuscule particles such as metal shavings and paint flecks. Despite their small size, these fragments can strike at relative speeds of 30,000 mph – easily fast enough to deal significant damage to a spacecraft. Satellites, rockets and space stations, as well as astronauts conducting spacewalks, have all had to cope with the increasing damage caused by collisions with these particles.
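To see why even a fleck of paint matters, consider its kinetic energy, which scales with the square of velocity. A one-gram fragment is assumed below purely for illustration (the original text gives no mass):

```python
# Kinetic energy of a tiny debris fragment at orbital collision speed.
MASS_KG = 0.001                # 1 g paint fleck (illustrative assumption)
SPEED_MS = 30000 * 0.44704     # 30,000 mph in m/s (~13.4 km/s)

energy_kj = 0.5 * MASS_KG * SPEED_MS ** 2 / 1000
print(f"Impact speed:   {SPEED_MS / 1000:.1f} km/s")
print(f"Kinetic energy: {energy_kj:.0f} kJ")  # ~90 kJ, like a one-tonne car at 30 mph
```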
One of the biggest issues with space junk is that it grows exponentially. This trend, along with the increasing number of countries entering space, has made orbital collisions an almost regular occurrence in recent years. The newest space-faring nations have been particularly affected.
Events similar to the 2009 collision of the US Iridium and Russian Kosmos satellites have raised fears of the so-called Kessler Syndrome. This scenario is where space junk reaches a critical mass, triggering a chain reaction of collisions until virtually every satellite and man-made object in an orbital band has been reduced to debris. Such an event could destroy the global economy and render future space travel almost impossible.
By 2030, the amount of space junk in orbit has tripled, compared to 2011.* Countless millions of fragments can now be found at various levels of orbit. A new generation of shielding for spacecraft and rockets is being developed, along with tougher and more durable space suits for astronauts. This includes the use of “self-healing” nanotechnology materials, though expenses are too high to outfit everything.
Larger chunks of debris have also been falling to Earth more frequently. Though most land in the ocean (since water covers 70% of the planet’s surface), a few crash on land, necessitating early warning systems for people in the affected areas.
Increased regulation has begun to mitigate the growth of space debris, while better shielding and repair technology has reduced the frequency of damage. Increased computing power and tracking systems are also helping to predict the path of debris and instruct spacecraft to avoid the most dangerous areas. Options to physically move debris are also being deployed – including nets and harpoons fired from small satellites, along with ground-based lasers that can push junk into decaying orbits so it burns up in the atmosphere. Despite this, space junk remains an expensive problem for now.
The UK space industry has quadrupled in size
In 2010, the UK government established the United Kingdom Space Agency (UKSA). This replaced the British National Space Centre and took over responsibility for key budgets and policies on space exploration – representing the country in all negotiations on space matters and bringing together all civil space activities under one single management.
By 2014, the UK’s thriving space sector was contributing over £9 billion ($15.2 billion) to the economy each year and directly employing 29,000 people, with an average growth rate of almost 7.5%. Recognising its strong potential, the government backed plans for a fourfold expansion of the industry.* New legal frameworks allowed a spaceport to be established in the UK – triggering growth of space tourism, launch services and other hi-tech companies.
By 2030, the UK has become a major player in the space industry, with a global market share of 10%. Having quadrupled in size, its space industry now contributes £40 billion ($67 billion) a year to the economy and has generated over 100,000 new high-skilled jobs.* The UK has significantly increased its leadership and influence in crucial areas like satellite communications, Earth observation, disaster relief and climate change monitoring. The growth of space-based products and services means the UK is now among the first 100% broadband-enabled countries in the world.* This has also reduced the costs of delivering government services to all citizens, regardless of their location.
The Lockheed Martin SR-72 enters service
The SR-72 is an unmanned, hypersonic aircraft intended for intelligence, surveillance and reconnaissance. Developed by Lockheed Martin, it is the long-awaited successor to the SR-71 Blackbird that was retired in 1998. The plane combines a traditional turbine engine with a scramjet to achieve speeds of Mach 6.0 – twice as fast as the SR-71 – making it capable of crossing the Atlantic Ocean in under an hour. A scaled demonstrator was built and tested in 2018. This was followed by a full-size demonstrator in 2023 and then entry into service by 2030.* The SR-72 is similar in size to the SR-71, at approximately 100 ft (30 m) long. With an operational altitude of 80,000 feet (24,300 metres), combined with its speed, the SR-72 is almost impossible to shoot down.
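A rough check of the Atlantic-crossing claim, assuming Mach 6 at cruise altitude (where the speed of sound is roughly 295 m/s) and an illustrative New York–London great-circle route:

```python
# Does Mach 6 cross the Atlantic in under an hour?
SPEED_OF_SOUND_MS = 295      # at ~80,000 ft (assumption)
CRUISE_MS = 6 * SPEED_OF_SOUND_MS
ROUTE_KM = 5570              # New York to London, approx. great circle

minutes = ROUTE_KM * 1000 / CRUISE_MS / 60
print(f"Cruise speed:  ~{CRUISE_MS * 3.6:.0f} km/h")   # ~6,370 km/h
print(f"Crossing time: ~{minutes:.0f} minutes")        # ~52 minutes
```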
Credit: Lockheed Martin
Half of America’s shopping malls have closed
For much of the 20th century, shopping malls were an intrinsic part of American culture. At their peak in the mid-1990s, the country was building 140 new shopping malls every year. But from the early 2000s onward, underperforming and vacant malls – known as “greyfield” and “dead mall” estates – became an emerging problem. In 2007, a year before the Great Recession, no new malls were built in America for the first time in half a century. Only a single new mall, the City Creek Center in Salt Lake City, was built between 2007 and 2012. The economic health of surviving malls continued to decline, with high vacancy rates creating a glut of retail space.*
A number of changes had occurred in shopping and driving habits. More and more people were living in cities, fewer were interested in driving, and consumers in general were spending less than before. Tech-savvy Millennials (also known as Generation Y), in particular, had embraced new ways of living. The Internet had made it far easier to identify the cheapest products and to order items without visiting a store in person. In earlier decades, this had mostly affected digital goods such as music, books and videos, which could be obtained in a matter of seconds – but eventually even clothing could be downloaded, thanks to the proliferation of 3D printing in the home.* Many abandoned malls are now being converted to other uses, such as housing.
The metaverse has reached $5 trillion in size
The metaverse is a network of online 3D worlds, accessed via the use of virtual reality (VR) and augmented reality (AR) headsets. The 2003 virtual world platform Second Life is often described as the first metaverse, as it incorporated aspects of social media into a persistent three-dimensional world with the user represented as an avatar.
Over the years, the metaverse evolved and grew to attract many more users, becoming essentially the next major iteration of the Internet. By 2021, it had reached an inflection point, with global investment of $57 billion, a figure that more than doubled to over $120 billion just a year later.
By 2030, the metaverse has reached a market size of $5 trillion and continues to grow.* The biggest revenue generators are e-commerce ($2.6 trillion), ahead of sectors such as virtual learning ($270 billion), advertising ($206 billion), and gaming ($125 billion). Almost all the major retail brands now have their own virtual store or shop front, where many items can be browsed and interacted with, along with a sizeable number of small and medium-sized businesses (SMEs).
Education, training, conferences, and business meetings in VR are now commonplace. Health and fitness is another major area of use: treadmill walkers and runners can, for example, move across the simulated surface of other worlds, or through myriad cities and locations on Earth, perhaps even in different periods of history. Meanwhile, advertising is more interactive (and some would say intrusive) than ever before.
Gaming in VR had already been an option in previous decades, though expensive and with a somewhat limited selection of games. This situation has now improved greatly, with a vast range of titles on offer, alongside greater opportunities for meeting and connecting with fellow players. Many of the environments featured in these virtual experiences are created and maintained by proto-artificial general intelligences (AGIs), which can auto-generate objects, content, and even whole storylines without a human programmer. However, individual and community-generated worlds are just as popular.
Like the explosion in sales of smartphones during the 2010s, the metaverse has entered the mainstream through a combination of technological advances and rapidly falling hardware costs. VR and AR headsets are now cheap and accessible to billions of people, often providing up to 8K resolution per eye.
8K VR headsets are common
8K displays (amounting to 33 MP of resolution per eye) are a fairly standard feature of virtual reality (VR) in 2030. These offer quadruple the pixel count of the best consumer VR products from a decade earlier.
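The per-eye figures follow directly from standard display dimensions:

```python
# Where "33 MP per eye" and "quadruple the pixel count" come from.
W8K, H8K = 7680, 4320    # standard 8K UHD
W4K, H4K = 3840, 2160    # high-end headsets of a decade earlier

mp_8k = W8K * H8K / 1e6
mp_4k = W4K * H4K / 1e6
print(f"8K per eye: {mp_8k:.1f} MP")          # ~33.2 MP
print(f"Versus 4K:  {mp_8k / mp_4k:.0f}x")    # 4x the pixels
```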
Following a long period with little or no activity, the VR industry saw a major revival from around 2015 onwards. A prototype of the Oculus Rift and its subsequent commercial release led to dozens of competitors within a few years, including models with better resolution and fields of view (FOV).
Having initially been a somewhat expensive and niche form of entertainment, VR declined greatly in cost during the 2020s. The COVID-19 pandemic accelerated its mainstream adoption.
By 2030, the quality of VR has improved exponentially.* The latest screens now provide breathtaking detail and realism, ultra-low latency, and wide FOV, while a variety of new features are combining to enhance the level of immersion and interactivity still further. For example, most headsets now include as standard the option for a brain-computer interface (BCI) to record users’ electrical signals, enabling actions to be directed by merely thinking about them.* Such technology had already begun to emerge some years previously, but has now improved greatly in terms of speed, accuracy and ubiquity.
Non-invasive sensors placed on the scalp are by far the preferred choice for mainstream BCI use. However, more advanced options for invasive BCIs have now begun to emerge, as the technology shifts from purely clinical uses (such as treating paralysis) and into business, leisure and entertainment.* Although still at a niche and experimental phase of development, early adopters willing to undergo surgery and have electrodes touching the surface of their brain can use bidirectional links for both reading and writing information to their neocortex.
In VR gaming, these more invasive BCIs can increase the level of immersion, tricking the senses in ways that bring a player closer to the action. New visual, auditory and tactile sensations are made possible by stimulating both the motor and visual cortex.* These effects are rather limited at this stage and exploited by only the most hardcore gamers – but provide more lifelike ways of interacting with simulated people, objects and environments.
This decade sees much progress with BCI technology as the number of electrodes used in the implants grows by leaps and bounds, enabling larger and more complex brain patterns to be recorded and decoded. In addition to gaming, BCIs gain popularity from the enhancement of wellness functions, such as for guided meditation and the improvement of sleep quality. At the same time, ethical issues are emerging over consent, privacy, identity, and agency, especially when BCIs are combined with AI.
100 terabyte HDDs reach consumer level
As of 2020, consumer-level hard disk drives (HDDs) featured capacities up to a maximum of about 18 terabytes (TB). While solid state drives (SSDs) had been available with greater storage sizes for enterprise-grade users, as well as faster transfer speeds, traditional HDDs remained a relevant and attractive alternative thanks to their much lower costs.
With perpendicular magnetic recording (PMR) approaching its limits, even when boosted with two-dimensional magnetic recording (TDMR), the capacity of HDDs had seen a reduced rate of growth in the 2010s. However, an innovative technique known as heat-assisted magnetic recording (HAMR) allowed data to be written to much smaller spaces. This provided a major acceleration in storage capacities during the 2020s, creating a new paradigm for HDDs, with successive generations now able to jump in larger steps of 4 TB, 6 TB, or even 10 TB at a time.
Although data densities continued to grow rapidly, transfer speeds became an increasingly important consideration. Vast storage volumes needed to be accessible at rates commensurate with their size. To boost the IOPS per TB performance of HDDs, multi-actuator drives began to emerge. For example, using two actuators instead of one could almost double throughput.
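A back-of-envelope illustration of the IOPS-per-terabyte problem, using a typical random-I/O figure for a 7,200 rpm drive (an illustrative assumption, not a quoted specification):

```python
# Random I/O per terabyte falls as drives grow, unless actuators are added.
IOPS_PER_ACTUATOR = 160   # typical 7,200 rpm figure (illustrative)

for capacity_tb, actuators in [(18, 1), (50, 1), (100, 2)]:
    iops = IOPS_PER_ACTUATOR * actuators
    print(f"{capacity_tb:>3} TB, {actuators} actuator(s): "
          f"{iops / capacity_tb:5.1f} IOPS per TB")
```

Doubling the actuators on a 100 TB drive only restores the per-terabyte performance of a 50 TB single-actuator unit, which is why multi-actuator designs matter so much at the top of the capacity range.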
With 50TB consumer-level HDDs on sale in 2026, hard drive makers continued to innovate and find ways of making data both smaller and faster, driven by the world’s ever-growing demand for storage. By 2030, consumer PC users have access to 100TB HDDs, quintupling the figure of a decade earlier.*
Completion of Saudi Vision 2030
This year sees the realisation of a long-term strategic framework by Saudi Arabia, intended to reduce the country’s dependence on oil, diversify its economy and develop public service sectors such as education, health, infrastructure, recreation, and tourism. The key goals of “Saudi Vision 2030” include reinforcing economic and investment activities, increasing non-oil international trade, and promoting a softer and more secular image of the Kingdom.
Crown Prince Mohammed bin Salman first announced the vision in 2016. The Saudi Council of Economic and Development Affairs (CEDA) then began identifying and monitoring the steps crucial for implementation by 2030. The CEDA established 13 programs, called Vision Realisation Programs, covering areas such as energy, finance, housing, quality of life, and transport.
The Kingdom’s goals for 2030* included:
• To move Saudi Arabia from the 19th largest economy in the world into the top 15
• To increase non-oil government revenue from SAR 163 billion (US$43.5bn) to at least SAR 1 trillion (US$267bn)
• To increase women’s participation in the workforce from 22% to 30%
• To increase small and medium-sized enterprise (SME) contribution to GDP from 20% to 35%
• To increase the private sector’s contribution from 40% to 65% of GDP
• To increase household spending on cultural and entertainment activities inside the Kingdom from the current level of 2.9% to 6%
• To increase the ratio of individuals exercising at least once a week from 13% of the population to 40%
• To increase the average life expectancy from 74 years to 80 years
• To have three Saudi cities be recognised in the top 1% of cities in the world
• To more than double the number of Saudi heritage sites registered with UNESCO
Alongside these socioeconomic measures, proposals for various large-scale projects began to emerge. The developers intended to both improve the country’s domestic transport and infrastructure, and to showcase Saudi Arabia to the world as a destination for leisure and investment. They included new retail, hotel, entertainment, cultural and residential megaprojects, as well as industrial, logistics, and corporate facilities.
By far the costliest and most prominent took the form of a $500bn smart city in the northwestern corner of the country. Known as Neom – a portmanteau of the Greek word neos, meaning “new,” and mustaqbal, the Arabic word for “future” – this would operate independently from the existing governmental framework, with its own tax and labour laws and an autonomous judicial system. According to its developers, Neom would be “a hub for innovation where global business and emerging players can research, incubate and commercialise ground-breaking technologies to accelerate human progress.” In 2021, Saudi Crown Prince Mohammed bin Salman unveiled the first major development within the Neom zone, a planned city named “The Line”.
The Line (as its name suggested) would consist of a long, linear development stretching for over 170 km (105 miles). This huge, continuous urban belt would enable the Red Sea coastline to the west to be linked with mountains and upper valleys in the east. The developers intended for it to redefine the traditional layout of a city by emphasising a strong focus on nature, liveability, health and community connections.
The Line’s masterplan called for building “around nature, rather than over it” and specified large areas of land to be preserved for conservation. The need for cars and other vehicles would be eliminated, with all essential daily needs provided within a five-minute walk for every resident. The project would include a system of ultra-high-speed mass transit running its complete length, with businesses and communities also hyper-connected through a digital framework incorporating artificial intelligence (AI) and robotics. The AI would monitor the city, using predictive and data models to optimise daily life for citizens in various ways. The Line would be self-sufficient with locally grown food, powered by 100% clean energy, home to abundant parks and other green spaces, and with sustainable and regenerative practices adopted throughout the city.
The developers completed phase one of The Line by 2025. Following subsequent expansion, its population has reached over a million by 2030* and the city has now boosted Saudi Arabia’s economy by SAR 180 billion (US$48bn).*
In addition to advanced technologies, The Line boasts other features. Its location makes it favourable in terms of weather and climate conditions, being one of the few areas in Saudi Arabia to experience snowfall in winter, as well as benefitting from the ocean breeze and aquatic recreation opportunities. Temperatures are 10°C lower than the average for the Arabian Peninsula. As a further geographic advantage, it can also be reached by more than 40% of the world’s population in less than a four-hour flight, while 13% of global trade already flows through the Red Sea.
The Line serves as a model for future developments within the Neom zone, while also inspiring other large-scale infrastructure projects both in Saudi Arabia and around the world.
Cargo Sous Terrain becomes operational in Switzerland
The Cargo Sous Terrain is an underground, automated system of freight transport that becomes operational in Switzerland from 2030 onwards.* It is designed to mitigate the increasing problem of road traffic, which has grown by 45% in the region since the mid-2010s. This tube network, including the self-driving carts and transfer stations, is built at a cost of $3.4 billion and is privately financed. The entire project is powered by renewables.
An initial pilot tunnel is constructed 50 metres below ground, with a total length of 41 miles (66 km). This connects Zurich, the largest city in Switzerland, with logistics centres near Bern (the capital) in the west. The route includes four above-ground waystations that enable cargo transfers. The pilot tunnel is followed by an expanded network that links Zurich with Lucerne and eventually Geneva, spanning the entire width of the country.
The unmanned, automated vehicles are propelled by electromagnetic induction and run at 19 mph (30 km/hour), operating 24 hours a day. An additional monorail system for packages, in the roof of the tunnel, moves at twice this speed. The Cargo Sous Terrain allows goods to be delivered more efficiently, at more regular intervals, while cutting air and noise pollution, as well as reducing the burden of traffic on overground roads and freight trains.***
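From the figures above, and assuming continuous travel with no transfer stops, end-to-end journey times on the pilot route work out as follows:

```python
# Transit times on the 66 km Zurich-Bern pilot tunnel.
ROUTE_KM = 66
CART_KMH = 30        # self-driving carts
MONORAIL_KMH = 60    # overhead package monorail ("twice this speed")

print(f"Carts:    {ROUTE_KM / CART_KMH:.1f} h")      # ~2.2 h
print(f"Monorail: {ROUTE_KM / MONORAIL_KMH:.1f} h")  # ~1.1 h
```

Slower than a truck on an open motorway, but the system’s value lies in its 24-hour, congestion-free regularity rather than raw speed.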
The entire ocean floor is mapped
While humans had long ago conquered the Earth’s land masses, the deep oceans lay mostly unexplored. In the early years of the 21st century, only 20% of the global ocean floor had been mapped in detail. Even the surfaces of the Moon, Mars and other planets were better understood. With data now becoming as important a commodity as oil, researchers set out to acquire knowledge of the remaining 80% and uncover a potential treasure trove of hidden information.
Seabed 2030 – a collaborative project between the Nippon Foundation of Japan and the General Bathymetric Chart of the Oceans (GEBCO) – aimed to bring together all available bathymetric data to produce a definitive map of the world ocean floor by 2030.
As part of the effort, fleets of automated vessels capable of trans-oceanic journeys would cover millions of square miles, carrying with them an array of sensors and other technology. These uncrewed ships, monitored by a control centre in Southampton, UK, would deploy tethered robots to inspect points of interest all the way down to the floor of the ocean, thousands of metres below the surface.
By 2030, the project is largely complete.** The maps provide a wealth of new information on the global seabed, revealing its geology in unprecedented detail and showing the location of ecological hotspots, as well as many shipwrecks, crashed planes, archaeological artefacts and other unique and interesting sites. Commercial applications include the inspection of pipelines, and surveying of bed conditions for telecoms cables, offshore wind farms and so on. However, concerns are raised over the potential impact of new undersea mining technology, the opportunities for which are now greatly increased.