Wireless Worries Overshadow Triumphs of RF Research - IEEE Spectrum




A leading expert in radio-frequency dosimetry dissects distress over 5G—and the differences between exposure and dosage

Kenneth R. Foster has decades of experience researching radio-frequency (RF) radiation and its effects on biological systems. He recently coauthored a survey on the subject with two other researchers, Marvin Ziskin and Quirino Balzano. Collectively, the three of them (all IEEE Life Fellows) have more than a century of experience in the field.

The survey, published in February in the International Journal of Environmental Research and Public Health, looks at the last 75 years of research into RF exposure assessment and dosimetry. In it the coauthors detail how far the field has advanced and why they believe it to be a scientific success story.

IEEE Spectrum carried out its conversation with Foster, a professor emeritus at the University of Pennsylvania, by email. We wanted to find out more about why RF exposure assessment research has been such a success, what makes RF dosimetry so difficult, and why public worries about health and wireless radiation never seem to go away.

For those who aren’t familiar with the distinction, what’s the difference between exposure and dose?

Kenneth Foster: In the context of RF safety, exposure refers to the fields outside the body, while dose refers to energy absorbed within body tissues. Both are very important for a host of applications—medical treatments, occupational health, and safety studies for consumer electronics, for example.
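To make the distinction concrete, the standard dose metric for RF fields is the specific absorption rate (SAR), which depends on the field inside tissue rather than the exposure field in air. The sketch below is illustrative only; the tissue values are assumed, not taken from the interview:

```python
# Illustrative sketch: dose (SAR) depends on the field *inside* tissue,
# not the exposure field measured in air. All values below are assumed.

def sar(sigma_s_per_m: float, e_rms_v_per_m: float, density_kg_per_m3: float) -> float:
    """Specific absorption rate (W/kg) from the internal RMS E-field:
    SAR = sigma * E^2 / rho."""
    return sigma_s_per_m * e_rms_v_per_m**2 / density_kg_per_m3

# Hypothetical muscle-like tissue at roughly 2.45 GHz: conductivity ~1.7 S/m,
# density ~1050 kg/m^3, and an assumed internal field of 10 V/m.
print(f"SAR = {sar(1.7, 10.0, 1050.0):.3f} W/kg")
```

The hard part, as Foster notes below, is that the internal field cannot be measured directly; it must be inferred or computed.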

“For a good review of 5G bioeffects studies, see [Ken] Karipidis’s article that found ‘no confirmed evidence that low-level RF fields above 6 gigahertz such as those used by the 5G network are hazardous to human health.’ ” —Kenneth R. Foster, University of Pennsylvania

In your opinion, is exposure assessment a solved problem?

Foster: Measuring RF fields in free space is not a problem. The real problem that arises in some situations is the highly variable nature of RF exposure. For example, a number of scientists are surveying levels of RF fields in the environment, to address the public’s health concerns. Not an easy task, given the multitude of RF sources in the environment and the rapid falloff of RF fields from any source. Accurately characterizing an individual’s exposure to RF fields is a real challenge, at least for the handful of scientists trying to do that.

When you and your coauthors wrote your IJERPH article, was your goal to point out the success of exposure-assessment research and the challenges of dosimetry?

Foster: Our goal was to point out the remarkable progress over the years in exposure-assessment research, which has added a lot of clarity to studies on biological effects of RF fields and enabled major advances in medical technology.

By just how much has the instrumentation in these fields improved? Can you give me a sense of what tools you had available to you at the beginning of your career, for example, versus what’s available now? And how has improved instrumentation contributed to the success of exposure assessment?

Foster: The instrumentation for measurement of RF fields in health and safety studies has become smaller and more capable. Decades ago, who would have imagined that commercial field meters would be available that are rugged enough to take to a work site, able to measure RF fields strong enough to pose occupational hazards but also sensitive enough to measure weak fields from distant antennas? And at the same time, determine the precise spectrum of a signal to identify its source?

What about when wireless technologies move into new frequency bands—millimeter and terahertz waves for cellular, for example, or the 6-gigahertz band for Wi-Fi?

Foster: The problem again relates to the complexity of exposure situations, not instrumentation. For example, high-band 5G cellular base stations transmit multiple beams that move around in space. That makes it difficult to quantify exposure to people near cellular base stations, to verify that exposures are within safety limits (as they almost invariably are).

“I am personally more concerned about possible effects of excessive screen time on child development and privacy issues.” —Kenneth R. Foster, University of Pennsylvania

If exposure assessment is a solved problem, what makes the jump to accurate dosimetry so difficult? What makes the former so much simpler than the latter?

Foster: Dosimetry is much more challenging than exposure assessment. You generally cannot stick an RF probe into someone’s body, yet there are many situations where you need that information, such as hyperthermia treatments for cancer, where tissue must be heated to precisely specified levels. Too little heating and there is no therapeutic benefit; too much and you burn the patient.

Can you tell me more about the ways in which dosimetry is done today? What’s the next best thing, if you can’t stick a probe into someone’s body?

Foster: For many purposes, using the good old RF meter to measure fields in air is okay. That is certainly the case with occupational-safety work, where you need to measure the RF fields incident on a worker’s body. For clinical hyperthermia, you may still need to skewer the patient with thermal probes, but computational dosimetry greatly improves the accuracy of thermal-dose estimates and has led to important advances in the technique. For RF bioeffects studies—for example, using antennas placed against an animal—it is crucial to know how much RF energy is absorbed in the body and where it goes. You can’t just wave a cellphone in front of the animal as the exposure source (but some investigators do just that). For some major studies, such as the recent National Toxicology Program study in rats exposed for their lifetimes to RF energy, there is no real alternative to computational dosimetry.

Why do you think there’s so much persistent worry about wireless radiation, to the extent people will measure the levels in their homes?

Foster: Risk perception is a complicated business. Wireless radiation has characteristics that tend to raise people’s concerns. You can’t see it. There is no immediate connection between exposure and the kinds of effects that some people worry about. People tend to confuse RF energy (which is nonionizing, meaning that its photons are too weak to break chemical bonds) with ionizing radiation such as X-rays (which are truly dangerous). Some people believe that they are “hypersensitive” to wireless radiation, despite the inability of scientists to demonstrate such sensitivity in properly blinded and controlled studies. Some people feel threatened by the immense number of antennas that are popping up everywhere for wireless communications. The scientific literature contains many reports of varying quality and relevance to health, and one can fish through this literature and put together a frightening story. And a few scientists think that there really may be health problems (although health agencies find little to concern them but say that “more research” is needed). The list goes on.

Exposure assessment plays some role in this. Consumers can buy cheap but very sensitive RF detectors and survey their environments for RF signals, of which there are many. Some of these devices emit “clicks” when measuring RF pulses from devices such as Wi-Fi access points, and sound for all the world like a Geiger counter at a nuclear reactor. Frightening. Some RF meters are also sold for hunting ghosts, but that is a different application.

Last year, the British Medical Journal published a call to halt 5G rollouts until the technology’s safety could be determined. What do you make of these kinds of calls? Do you think they help inform the portion of the public that is concerned about the health effects of RF exposure, or cause more confusion?

Foster: You refer to an opinion piece by [epidemiologist John] Frank, much of which I disagree with. Most health agencies that have reviewed the science simply call for more research, but at least one—the Health Council of the Netherlands—has called for a moratorium on rollout of high-band 5G until more safety studies are done. Such recommendations are surely concerning to the public (even though HCN also considered it unlikely that any health problems existed).

In his piece, Frank cites “an emerging preponderance of laboratory studies indicating RF-EMFs’ [radiofrequency electromagnetic fields] disruptive biological effects.” Here is the problem: There are thousands of RF bioeffects studies in the literature that vary widely in endpoint, relevance to health, study quality, and exposure level. Most of them report some kind of effect, over all frequencies and at all exposure levels. However, most of the studies have significant risk of bias (inadequate dosimetry, lack of blinding, small size, and so on), and many are inconsistent with other studies. The “emerging preponderance of studies” means little with respect to this murky literature. Frank should have relied on more careful reviews by health agencies. These consistently fail to find clear evidence for adverse effects of environmental RF fields.

Frank complains about inconsistencies in public discussion of “5G”—but he makes the same error, referring to 5G without reference to the frequency band. In fact, low- and midband 5G operates at frequencies close to present cellular bands and would seem to present no new exposure issues. High-band 5G operates just below the millimeter-wave range, which begins at 30 gigahertz. Fewer bioeffects studies have been done in that frequency range, but the energy hardly penetrates the skin, and health agencies have not expressed concern about its safety at ordinary exposure levels.

Frank is not specific about what studies he wants done before rolling out “5G,” whatever he means by that. The [U.S. Federal Communications Commission] requires licensees to comply with its exposure limits, which are similar to those of most other countries. There is no precedent for requiring new RF technologies to be directly assessed for RF health effects before approval, which would require a potentially endless series of studies. If the FCC limits are unsafe they should be changed.

For a good review of 5G bioeffects studies, see [Ken] Karipidis’s article that found “no confirmed evidence that low-level RF fields above 6 gigahertz such as those used by the 5G network are hazardous to human health.” The review also called for more research.

So that’s what’s needed at this time? More research?

Foster: The scientific literature is uneven, but so far health agencies have not found clear evidence for health hazards from environmental RF fields. To be sure, the literature on bioeffects of millimeter waves is relatively sparse, with maybe 100 studies, and very mixed in quality.

Governments have made a lot of money selling spectrum for 5G communications, and should invest some of that in high quality health studies, particularly for high-band 5G. I am personally more concerned about possible effects of excessive screen time on child development and privacy issues.

Are there ways in which dosimetry efforts are improving? If so, what are some of the most interesting or promising examples?

Foster: Probably the major advance has been in computational dosimetry, with the introduction of the finite difference time domain (FDTD) method and numerical models of the body based on high-resolution medical images. This allows very precise calculation of the absorption of RF energy in the body from any source. Computational dosimetry has given new life to established medical treatments such as hyperthermia for treatment of cancer, and has facilitated the development of improved MRI imaging systems and many other medical technologies.
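To give a flavor of the FDTD method Foster mentions, here is a minimal one-dimensional sketch: a pulse marching across a uniform grid by alternately updating electric and magnetic fields. Real dosimetry codes solve the full three-dimensional Maxwell equations over voxel models built from medical images; every parameter here (grid size, source, Courant number) is an arbitrary assumption for illustration.

```python
import numpy as np

# Minimal 1-D FDTD sketch (illustrative only): a Gaussian pulse propagating
# in free space on a staggered Yee grid, in normalized units.

nx, nt = 200, 150
ez = np.zeros(nx)        # electric field samples
hy = np.zeros(nx - 1)    # magnetic field samples, offset half a cell

for t in range(nt):
    # Update H from the spatial difference (curl) of E; 0.5 is the Courant number.
    hy += 0.5 * (ez[1:] - ez[:-1])
    # Update E from the spatial difference of H (grid ends stay at zero).
    ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])
    # Soft source: inject a Gaussian pulse at cell 50.
    ez[50] += np.exp(-((t - 30) / 10.0) ** 2)

print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
```

A 3-D dosimetry code follows the same update pattern per voxel, with material properties (conductivity, permittivity) assigned from the body model.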

Michael Koziol is an associate editor at IEEE Spectrum where he covers everything telecommunications. He graduated from Seattle University with bachelor's degrees in English and physics, and earned his master's degree in science journalism from New York University.

There’s plenty of bandwidth available if we use reconfigurable intelligent surfaces

Ground level in a typical urban canyon, shielded by tall buildings, will be inaccessible to some 6G frequencies. Deft placement of reconfigurable intelligent surfaces [yellow] will enable the signals to pervade these areas.

For all the tumultuous revolution in wireless technology over the past several decades, there have been a couple of constants. One is the overcrowding of radio bands, and the other is the move to escape that congestion by exploiting higher and higher frequencies. And today, as engineers roll out 5G and plan for 6G wireless, they find themselves at a crossroads: After years of designing superefficient transmitters and receivers, and of compensating for the signal losses at the end points of a radio channel, they’re beginning to realize that they are approaching the practical limits of transmitter and receiver efficiency. From now on, to get high performance as we go to higher frequencies, we will need to engineer the wireless channel itself. But how can we possibly engineer and control a wireless environment, which is determined by a host of factors, many of them random and therefore unpredictable?

Perhaps the most promising solution, right now, is to use reconfigurable intelligent surfaces. These are planar structures typically ranging in size from about 100 square centimeters to about 5 square meters or more, depending on the frequency and other factors. These surfaces use advanced substances called metamaterials to reflect and refract electromagnetic waves. Thin two-dimensional metamaterials, known as metasurfaces, can be designed to sense the local electromagnetic environment and tune the wave’s key properties, such as its amplitude, phase, and polarization, as the wave is reflected or refracted by the surface. When waves fall on such a surface, it can alter their direction to strengthen the channel. In fact, these metasurfaces can be programmed to make these changes dynamically, reconfiguring the signal in real time in response to changes in the wireless channel. Think of reconfigurable intelligent surfaces as the next evolution of the repeater concept.

Reconfigurable intelligent surfaces could play a big role in the coming integration of wireless and satellite networks.

That’s important, because as we move to higher frequencies, the propagation characteristics become more “hostile” to the signal. The wireless channel varies constantly depending on surrounding objects. At 5G and 6G frequencies, the wavelength is vanishingly small compared to the size of buildings, vehicles, hills, trees, and rain. Lower-frequency waves diffract around or through such obstacles, but higher-frequency signals are absorbed, reflected, or scattered. Basically, at these frequencies, the line-of-sight signal is about all you can count on.

Such problems help explain why the topic of reconfigurable intelligent surfaces (RIS) is one of the hottest in wireless research. The hype is justified. A landslide of R&D activity and results has gathered momentum over the last several years, set in motion by the development of the first digitally controlled metamaterials almost 10 years ago.

This article was jointly produced by IEEE Spectrum and Proceedings of the IEEE with similar versions published in both publications. For more on reconfigurable intelligent surfaces, those with access to IEEE Xplore can download a complete special issue on the topic.

RIS prototypes are showing great promise at scores of laboratories around the world, yet the field is young: one of the first major projects, the European-funded Visorsurf, began just five years ago and ran until 2020. The first public demonstrations of the technology occurred in late 2018, by NTT Docomo in Japan and Metawave, of Carlsbad, Calif.

Today, hundreds of researchers in Europe, Asia, and the United States are working on applying RIS to produce programmable and smart wireless environments. Vendors such as Huawei, Ericsson, NEC, Nokia, Samsung, and ZTE are working alone or in collaboration with universities. And major network operators, such as NTT Docomo, Orange, China Mobile, China Telecom, and BT are all carrying out substantial RIS trials or have plans to do so. This work has repeatedly demonstrated the ability of RIS to greatly strengthen signals in the most problematic bands of 5G and 6G.

To understand how RIS improves a signal, consider the electromagnetic environment. Traditional cellular networks consist of scattered base stations that are deployed on masts or towers, and on top of buildings and utility poles in urban areas. Objects in the path of a signal can block it, a problem that becomes especially bad at 5G’s higher frequencies, such as the millimeter-wave bands between 24.25 and 52.6 gigahertz. And it will only get worse if communication companies go ahead with plans to exploit subterahertz bands, between 90 and 300 GHz, in 6G networks. Here’s why. With 4G and similar lower-frequency bands, reflections from surfaces can actually strengthen the received signal, as reflected signals combine. However, as we move higher in frequencies, such multipath effects become much weaker or disappear entirely. The reason is that surfaces that appear smooth to a longer-wavelength signal are relatively rough to a shorter-wavelength signal. So rather than reflecting off such a surface, the signal simply scatters.
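The rough-versus-smooth distinction can be made concrete with the classic Rayleigh criterion: a surface reflects specularly only while its height deviations stay below roughly one-eighth of a wavelength (divided by the cosine of the incidence angle). A small sketch, with frequencies chosen for illustration:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def rayleigh_rough_height_mm(freq_hz: float, incidence_deg: float = 0.0) -> float:
    """Height deviation (mm) above which a surface scatters rather than
    reflects specularly, per the Rayleigh criterion h > lambda / (8 cos theta)."""
    lam = C / freq_hz
    return 1e3 * lam / (8 * math.cos(math.radians(incidence_deg)))

# Ordinary wall textures (a few millimeters) are "smooth" at 2 GHz but
# "rough" at millimeter-wave and subterahertz frequencies.
for f in (2e9, 28e9, 140e9):
    print(f"{f/1e9:5.0f} GHz: rough above ~{rayleigh_rough_height_mm(f):.2f} mm")
```

At 2 gigahertz the threshold is nearly 2 centimeters, so brick and stucco reflect usefully; at 28 and 140 gigahertz it shrinks to roughly a millimeter or less, which is why multipath fades away.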

One solution is to use more powerful base stations or to install more of them throughout an area. But that strategy can double costs, or worse. Repeaters or relays can also improve coverage, but here, too, the costs can be prohibitive. RIS, on the other hand, promises greatly improved coverage at just marginally higher cost.

The key feature of RIS that makes it attractive in comparison with these alternatives is its nearly passive nature. The absence of amplifiers to boost the signal means that an RIS node can be powered with just a battery and a small solar panel.

An RIS functions like a very sophisticated mirror whose orientation and curvature can be adjusted to focus and redirect a signal in a specific direction. But rather than physically moving or reshaping the mirror, you electronically alter its surface so that it changes key properties of the incoming electromagnetic wave, such as the phase.

That’s what the metamaterials do. This emerging class of materials exhibits properties beyond (from the Greek meta) those of natural materials, such as anomalous reflection or refraction. The materials are fabricated using ordinary metals and electrical insulators, or dielectrics. As an electromagnetic wave impinges on a metamaterial, a predetermined gradient in the material alters the phase and other characteristics of the wave, making it possible to bend the wave front and redirect the beam as desired.

An RIS node is made up of hundreds or thousands of metamaterial elements called unit cells. Each cell consists of metallic and dielectric layers along with one or more switches or other tunable components. A typical structure includes an upper metallic patch with switches, a biasing layer, and a metallic ground layer separated by dielectric substrates. By controlling the biasing—the voltage between the metallic patch and the ground layer—you can switch each unit cell on or off and thus control how each cell alters the phase and other characteristics of an incident wave.

To control the direction of the larger wave reflecting off the entire RIS, you synchronize all the unit cells to create patterns of constructive and destructive interference in the larger reflected waves [see illustration below]. This interference pattern reforms the incident beam and sends it in a particular direction determined by the pattern. This basic operating principle, by the way, is the same as that of a phased-array radar.
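The phase pattern that steers a beam can be sketched for a single row of cells, exactly as in a phased array. The geometry below (28 gigahertz, half-wavelength cell pitch, 64 cells, a 30-degree target) is assumed for illustration:

```python
import numpy as np

# Sketch: steering a reflected beam with a linear phase gradient across a
# 1-D row of unit cells. All geometry values are illustrative assumptions.

freq = 28e9
lam = 3e8 / freq
n_cells = 64
pitch = lam / 2            # half-wavelength cell spacing
target_deg = 30.0

# Phase each cell applies so that all contributions add in phase toward target_deg.
n = np.arange(n_cells)
cell_phase = -2 * np.pi * pitch * n * np.sin(np.radians(target_deg)) / lam

# Array factor: scan observation angles and find where reflected power peaks.
angles = np.radians(np.linspace(-90, 90, 1801))
steer = np.exp(1j * (2 * np.pi * pitch * np.outer(np.sin(angles), n) / lam))
af = np.abs(steer @ np.exp(1j * cell_phase))

peak_deg = np.degrees(angles[np.argmax(af)])
print(f"beam peaks at {peak_deg:.1f} degrees")
```

The computed peak lands on the commanded direction, which is precisely the synchronization of unit cells described above.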

A reconfigurable intelligent surface comprises an array of unit cells. In each unit cell, a metamaterial alters the phase of an incoming radio wave, so that the resulting waves interfere with one another [above, top]. Precisely controlling the patterns of this constructive and destructive interference allows the reflected wave to be redirected [bottom], improving signal coverage.

An RIS has other useful features. Even without an amplifier, an RIS manages to provide substantial gain—about 30 to 40 decibels relative to isotropic (dBi)—depending on the size of the surface and the frequency. That’s because the gain of an antenna is proportional to the antenna’s aperture area. An RIS has the equivalent of many antenna elements covering a large aperture area, so it has higher gain than a conventional antenna does.
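The aperture-gain relationship is G = 4 * pi * A * eta / lambda^2, where A is the aperture area and eta the efficiency. A quick sketch with assumed dimensions shows why a modest surface reaches such figures; with realistic efficiencies well below 100 percent, the ideal value drops into the 30-to-40 dBi range cited above:

```python
import math

def aperture_gain_dbi(area_m2: float, freq_hz: float, efficiency: float = 1.0) -> float:
    """Aperture gain G = 4 * pi * A * eta / lambda^2, expressed in dBi."""
    lam = 3e8 / freq_hz
    return 10 * math.log10(4 * math.pi * area_m2 * efficiency / lam**2)

# A 0.5 m x 0.5 m surface at 28 GHz (assumed example dimensions):
print(f"ideal:      {aperture_gain_dbi(0.25, 28e9):.1f} dBi")
print(f"25%-efficient: {aperture_gain_dbi(0.25, 28e9, efficiency=0.25):.1f} dBi")
```

The same formula explains why lower frequencies need larger surfaces: halving the frequency quadruples the area required for the same gain.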

All the many unit cells in an RIS are controlled by a logic chip, such as a field-programmable gate array with a microcontroller, which also stores the many coding sequences needed to dynamically tune the RIS. The controller gives the appropriate instructions to the individual unit cells, setting their state. The most common coding scheme is simple binary coding, in which the controller toggles the switches of each unit cell on and off. The unit-cell switches are usually semiconductor devices, such as PIN diodes or field-effect transistors.
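The cost of that simple binary coding can be sketched directly: snapping each cell's ideal continuous phase to the nearer of two states (diode off or on) sacrifices a well-known 3.9 decibels or so of coherent gain. A quick illustration with random ideal phases:

```python
import numpy as np

# Sketch of 1-bit phase coding: the controller snaps each cell's ideal phase
# to the nearer of two states, 0 or pi (diode off/on). The classic penalty
# for 1-bit phase quantization is about 3.9 dB of lost coherent gain.

ideal_phase = np.random.default_rng(0).uniform(0, 2 * np.pi, 1024)
diode_on = (ideal_phase > np.pi / 2) & (ideal_phase <= 3 * np.pi / 2)  # nearer to pi
coded_phase = np.where(diode_on, np.pi, 0.0)

# Coherent-sum amplitude relative to perfect (continuous) phase control.
loss = np.abs(np.exp(1j * (coded_phase - ideal_phase)).sum()) / len(ideal_phase)
print(f"1-bit coding keeps {loss:.2f} of the ideal amplitude "
      f"({20*np.log10(loss):.1f} dB loss)")
```

This is exactly the complexity-versus-performance trade-off revisited later in the article: more phase bits per cell recover gain but cost switches, power, and control overhead.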

The important factors here are power consumption, speed, and flexibility, with the control circuit usually being one of the most power-hungry parts of an RIS. Reasonably efficient RIS implementations today consume from a few watts up to a dozen watts while actively switching configurations, and much less when idle.

To deploy RIS nodes in a real-world network, researchers must first answer three questions: How many RIS nodes are needed? Where should they be placed? And how big should the surfaces be? As you might expect, there are complicated calculations and trade-offs.

Engineers can identify the best RIS positions by planning for them when the base station is designed. Or it can be done afterward by identifying, in the coverage map, the areas of poor signal strength. As for the size of the surfaces, that will depend on the frequencies (lower frequencies require larger surfaces) as well as the number of surfaces being deployed.

To optimize the network’s performance, researchers rely on simulations and measurements. At Huawei Sweden, where I work, we’ve had a lot of discussions about the best placement of RIS units in urban environments. We’re using a proprietary platform, called the Coffee Grinder Simulator, to simulate an RIS installation prior to its construction and deployment. We’re partnering with CNRS Research and CentraleSupélec, both in France, among others.

In a recent project, we used simulations to quantify the performance improvement gained when multiple RIS were deployed in a typical urban 5G network. As far as we know, this was the first large-scale, system-level attempt to gauge RIS performance in that setting. We optimized the RIS-augmented wireless coverage through the use of efficient deployment algorithms that we developed. Given the locations of the base stations and the users, the algorithms were designed to help us select the optimal three-dimensional locations and sizes of the RIS nodes from among thousands of possible positions on walls, roofs, corners, and so on. The output of the software is an RIS deployment map that maximizes the number of users able to receive a target signal.
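The deployment algorithms themselves are proprietary, but the underlying idea resembles the classic greedy maximum-coverage heuristic: from many candidate positions, repeatedly pick the one that brings the most still-uncovered users above the signal target. A toy sketch with made-up candidates and users:

```python
# Toy sketch of RIS placement as greedy max-coverage. The candidate names
# and user sets below are invented for illustration; the real algorithms
# evaluate thousands of 3-D positions against simulated coverage maps.

candidates = {
    "wall_A":   {1, 2, 3},
    "roof_B":   {3, 4, 5, 6},
    "corner_C": {1, 6, 7},
    "wall_D":   {7, 8},
}

def greedy_deploy(candidates: dict, budget: int) -> list:
    covered = set()
    chosen = []
    for _ in range(budget):
        # Pick the candidate covering the most not-yet-covered users.
        best = max(candidates, key=lambda c: len(candidates[c] - covered))
        if not candidates[best] - covered:
            break  # no remaining candidate adds new coverage
        chosen.append(best)
        covered |= candidates[best]
    return chosen

print(greedy_deploy(candidates, budget=2))
```

Greedy selection is a standard baseline for this kind of set-cover problem; it comes with a well-known approximation guarantee and scales to large candidate sets.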

An experimental reconfigurable intelligent surface with 2,304 unit cells was tested at Tsinghua University, in Beijing, last year.

Of course, the users of special interest are those at the edges of the cell-coverage area, who have the worst signal reception. Our results showed big improvements in coverage and data rates at the cell edges—and also for users with decent signal reception, especially in the millimeter band.

We also investigated how potential RIS hardware trade-offs affect performance. Simply put, every RIS design requires compromises—such as digitizing the responses of each unit cell into binary phases and amplitudes—in order to construct a less complex and cheaper RIS. But it’s important to know whether a design compromise will create additional beams to undesired directions or cause interference to other users. That’s why we studied the impact of network interference due to multiple base stations, waves reradiated by the RIS, and other factors.

Not surprisingly, our simulations confirmed that both larger RIS surfaces and larger numbers of them improved overall performance. But which is preferable? When we factored in the costs of the RIS nodes and the base stations, we found that in general a smaller number of larger RIS nodes, deployed further from a base station and its users to provide coverage to a larger area, was a particularly cost-effective solution.

The dimensions of the RIS depend on the operating frequency [see illustration below]. We found that a small number of rectangular RIS nodes, each around 4 meters wide for C-band frequencies (3.5 GHz) and around half a meter wide for the millimeter-wave band (28 GHz), was a good compromise and could boost performance significantly in both bands. This was a pleasant surprise: RIS improved signals not only in the millimeter-wave (5G high) band, where coverage problems can be especially acute, but also in the C band (5G mid).

To extend wireless coverage indoors, researchers in Asia are investigating a really intriguing possibility: covering room windows with transparent RIS nodes. Experiments at NTT Docomo and at Southeast and Nanjing universities, both in China, used smart films or smart glass. The films are fabricated from transparent conductive oxides (such as indium tin oxide), graphene, or silver nanowires and do not noticeably reduce light transmission. When the films are placed on windows, signals coming from outside can be refracted and boosted as they pass into a building, enhancing the coverage inside.

Planning and installing the RIS nodes is only part of the challenge. For an RIS node to work optimally, it needs to have a configuration, moment by moment, that is appropriate for the state of the communication channel in the instant the node is being used. The best configuration requires an accurate and instantaneous estimate of the channel. Technicians can come up with such an estimate by measuring the “channel impulse response” between the base station, the RIS, and the users. This response is measured using pilots, which are reference signals known beforehand by both the transmitter and the receiver. It’s a standard technique in wireless communications. Based on this estimation of the channel, it’s possible to calculate the phase shifts for each unit cell in the RIS.
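For a single flat-fading link, the pilot-based estimation step can be sketched as a least-squares fit: transmit known symbols, observe the noisy output, and solve for the complex channel gain. The numbers below are assumptions for illustration; the RIS overhead problem arises because something like this must be repeated for each unit-cell configuration.

```python
import numpy as np

# Sketch of pilot-based least-squares channel estimation for one link.
# All values (channel gain, pilot count, noise level) are assumed.

rng = np.random.default_rng(1)
h_true = 0.8 * np.exp(1j * 0.6)                      # unknown complex channel
pilots = np.exp(1j * 2 * np.pi * rng.random(64))     # known unit-power symbols
noise = 0.05 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
y = h_true * pilots + noise                          # received samples

# Least-squares estimate: h_hat = (x^H y) / (x^H x). np.vdot conjugates
# its first argument, which is exactly the x^H we need.
h_hat = np.vdot(pilots, y) / np.vdot(pilots, pilots)
print(f"estimate error: {abs(h_hat - h_true):.4f}")
```

Averaging over more pilots shrinks the error, which is precisely why per-cell estimation across thousands of unit cells becomes so expensive in pilot overhead.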

The current approaches perform these calculations at the base station. However, that requires a huge number of pilots, because every unit cell needs its own phase configuration. There are various ideas for reducing this overhead, but so far none of them are really promising.

The total calculated configuration for all of the unit cells is fed to each RIS node through a wireless control link. So each RIS node needs a wireless receiver to periodically collect the instructions. This of course consumes power, and it also means that the RIS nodes are fully dependent on the base station, with unavoidable—and unaffordable—overhead and the need for continuous control. As a result, the whole system requires a flawless and complex orchestration of base stations and multiple RIS nodes via the wireless-control channels.

We need a better way. Recall that the “I” in RIS stands for intelligent. The word suggests real-time, dynamic control of the surface from within the node itself—the ability to learn, understand, and react to changes. We don’t have that now. Today’s RIS nodes cannot perceive, reason, or respond; they only execute remote orders from the base station. That’s why my colleagues and I at Huawei have started working on a project we call Autonomous RIS (AutoRIS). The goal is to enable the RIS nodes to autonomously control and configure the phase shifts of their unit cells. That will largely eliminate the base-station-based control and the massive signaling that either limit the data-rate gains from using RIS, or require synchronization and additional power consumption at the nodes. The success of AutoRIS might very well help determine whether RIS will ever be deployed commercially on a large scale.

Of course, it’s a rather daunting challenge to integrate into an RIS node the necessary receiving and processing capabilities while keeping the node lightweight and low power. In fact, it will require a huge research effort. For RIS to be commercially competitive, it will have to preserve its low-power nature.

With that in mind, we are now exploring the integration of an ultralow-power AI chip in an RIS, as well as the use of extremely efficient machine-learning models to provide the intelligence. These smart models will be able to produce the output RIS configuration based on the received data about the channel, while at the same time classifying users according to their contracted services and their network operator. Integrating AI into the RIS will also enable other functions, such as dynamically predicting upcoming RIS configurations and grouping users by location or other behavioral characteristics that affect the RIS operation.

Intelligent, autonomous RIS won’t be necessary for all situations. For some areas, a static RIS, with occasional reconfiguration—perhaps a couple of times per day or less—will be entirely adequate. In fact, there will undoubtedly be a range of deployments from static to fully intelligent and autonomous. Success will depend on not just efficiency and high performance but also ease of integration into an existing network.

6G promises to unleash staggering amounts of bandwidth—but only if we can surmount a potentially ruinous range problem.

The real test case for RIS will be 6G. The coming generation of wireless is expected to embrace autonomous networks and smart environments with real-time, flexible, software-defined, and adaptive control. Compared with 5G, 6G is expected to provide much higher data rates, greater coverage, lower latency, more intelligence, and sensing services of much higher accuracy. At the same time, a key driver for 6G is sustainability—we’ll need more energy-efficient solutions to achieve the “net zero” emission targets that many network operators are striving for. RIS fits all of those imperatives.

Start with massive MIMO, which stands for multiple-input multiple-output. This foundational 5G technique uses multiple antennas packed into an array at both the transmitting and receiving ends of wireless channels, to send and receive many signals at once and thus dramatically boost network capacity. However, the desire for higher data rates in 6G will demand even more massive MIMO, which will require many more radio-frequency chains to work and will be power-hungry and costly to operate. An energy-efficient and less costly alternative will be to place multiple low-power RIS nodes between massive MIMO base stations and users as we have described in this article.

The millimeter-wave and subterahertz 6G bands promise to unleash staggering amounts of bandwidth, but only if we can surmount a potentially ruinous range problem without resorting to costly solutions, such as ultradense deployments of base stations or active repeaters. My opinion is that only RIS will be able to make these frequency bands commercially viable at a reasonable cost.

The communications industry is already touting sensing—high-accuracy localization services as well as object detection and posture recognition—as an important possible feature for 6G. Sensing would also enhance performance. For example, highly accurate localization of users will help steer wireless beams efficiently. Sensing could also be offered as a new network service to vertical industries such as smart factories and autonomous driving, where detection of people or cars could be used for mapping an environment; the same capability could be used for surveillance in a home-security system. The large aperture of RIS nodes and their resulting high resolution mean that such applications will be not only possible but probably even cost effective.

And the sky is not the limit. RIS could enable the integration of satellites into 6G networks. Typically, a satellite uses a lot of power and has large antennas to compensate for the long-distance propagation losses and for the modest capabilities of mobile devices on Earth. RIS could play a big role in minimizing those limitations and perhaps even allowing direct communication from satellite to 6G users. Such a scheme could lead to more efficient satellite-integrated 6G networks.

As it transitions into new services and vast new frequency regimes, wireless communications will soon enter a period of great promise and sobering challenges. Many technologies will be needed to usher in this next exciting phase. None will be more essential than reconfigurable intelligent surfaces.

The author wishes to acknowledge the help of Ulrik Imberg in the writing of this article.