Wednesday, November 6, 2024

A Space Walking Robot Could Build a Giant Telescope in Space

The Hubble Space Telescope was carried to space inside the space shuttle Discovery and then released into low-Earth orbit. The James Webb Space Telescope was squeezed inside the nose cone of an Ariane 5 rocket and then launched. It deployed its mirror and sunshield on its way to its home at the Sun-Earth L2 Lagrange point.

The ISS, by contrast, was assembled in space from components launched at different times. Could it be a model for building future space telescopes and other space facilities?

The Universe has a lot of dark corners that need to be peered into. That’s why we’re driven to build more powerful telescopes, which means larger mirrors. However, it becomes increasingly difficult to launch them into space inside rocket nose cones. Since we don’t have space shuttles anymore, this leads us to a natural conclusion: assemble our space telescopes in space using powerful robots.

New research in the journal Acta Astronautica examines the viability of using walking robots to build space telescopes.

The research is “The new era of walking manipulators in space: Feasibility and operational assessment of assembling a 25 m Large Aperture Space Telescope in orbit.” The lead author is Manu Nair from the Lincoln Centre for Autonomous Systems in the UK.

“This research is timely given the constant clamour for high-resolution astronomy and Earth observation within the space community and serves as a baseline for future missions with telescopes of much larger aperture, missions requiring assembly of space stations, and solar-power generation satellites, to list a few,” the authors write.

While the Canadarm and the European Robotic Arm on the ISS have proven capable and effective, they have limitations. They’re remotely operated by astronauts and have only limited walking abilities.

Recognizing the need for more capable space telescopes, space stations, and other infrastructure, Nair and his co-authors are developing a concept for an improved walking robot. “To address the limitations of conventional walking manipulators, this paper presents a novel seven-degrees-of-freedom dexterous End-Over-End Walking Robot (E-Walker) for future In-Space Assembly and Manufacturing (ISAM) missions,” they write.

An illustration of the E-walker. The robot has seven degrees of freedom, meaning it has seven independent motions. Image Credit: Mini Rai, University of Lincoln.

Robotics, Automation, and Autonomous Systems (RAAS) will play a big role in the future of space telescopes and other infrastructure. These systems require dexterity, a high degree of autonomy, redundancy, and modularity. A lot of work remains to create RAAS that can operate in the harsh environment of space. The E-Walker is a concept that aims to fulfill some of these requirements.

The authors point out how robots are already being used in demanding industrial settings here on Earth. The Joint European Torus is being decommissioned, and a Boston Dynamics Spot quadruped robot was trialled there to test how effectively robots can support that work. It moved around the JET autonomously during a 35-day trial, mapping the facility and taking sensor readings, all while avoiding obstacles and personnel.

The Boston Dynamics Spot robot spent 35 days working autonomously on the Joint European Torus. Here, Spot is inspecting wires and pipes at the facility at Culham, near Oxford (Image Credit: UKAEA)

Using Spot during an industrial shutdown shows the potential of autonomous robots. However, robots still have a long way to go before they can build a space telescope. The authors’ case study could be an important initial step.

Their case study is the hypothetical LAST, a Large Aperture Space Telescope with a wide-field, 25-meter primary mirror that operates in visible light. LAST is the backdrop for the researchers’ feasibility study.

LAST’s primary mirror would be modular, and each piece would have connector ports and interfaces for construction and for data, power, and thermal transfer. This type of modularity would make it easier for autonomous systems to assemble the telescope.

LAST would build its mirror using Primary Mirror Units (PMUs). Nineteen PMUs make up a Primary Mirror Segment (PMS), and 18 PMSs would constitute LAST’s 25-meter primary mirror. A total of 342 PMUs would be needed to complete the telescope.
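As a quick check of that segment arithmetic, a few lines of Python reproduce the totals described in the paper; the constant names are just illustrative.

```python
# Segment arithmetic for the hypothetical 25 m LAST primary mirror (Nair et al. 2024).
PMUS_PER_SEGMENT = 19   # Primary Mirror Units per Primary Mirror Segment
SEGMENT_COUNT = 18      # Primary Mirror Segments in the full 25 m mirror

total_pmus = PMUS_PER_SEGMENT * SEGMENT_COUNT
print(f"Total PMUs to assemble: {total_pmus}")  # -> 342
```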

This figure shows how LAST would be constructed. 342 Primary Mirror Units make up the 18 Primary Mirror Segments, adding up to a 25-meter primary mirror. (b) shows how the center of each PMU is found, and (c) shows a PMU and its connectors. Image Credit: Nair et al. 2024.

The E-Walker concept would also have two spacecraft: a Base Spacecraft (BSC) and a Storage Spacecraft (SSC). The BSC would act as a kind of mothership, sending required commands to the E-Walker, monitoring its operational state, and ensuring that things go smoothly. The SSC would hold all of the PMUs in a stacked arrangement, and the E-Walker would retrieve one at a time.

The researchers developed eleven different Concepts of Operations (ConOps) for the LAST mission. Some of the ConOps included multiple E-walkers working cooperatively. The goals are to optimize task-sharing, minimize launch mass, and simplify control and motion planning. “The above-mentioned eleven mission scenarios are studied further to choose the most feasible ConOps for the assembly of the 25m LAST,” they explain.

This figure summarizes the 11 mission ConOps developed for LAST. (a) shows assembly with a single E-walker, (b) shows partially shared responsibilities among the E-walkers, (c) shows equally shared responsibilities between E-walkers, and (d) shows assembly carried out in two separate units, which is the safer assembly option. Image Credit: Nair et al. 2024.

Advanced tools like robotics and AI will be mainstays in the future of space exploration. It’s almost impossible to imagine a future where they aren’t critical, especially as our goals become more complex. “The capability to assemble complex systems in orbit using one or more robots will be an absolute requirement for supporting a resilient future orbital ecosystem,” the authors write. “In the forthcoming decades, newer infrastructures in the Earth’s orbits, which are much more advanced than the International Space Station, are needed for in-orbit servicing, manufacturing, recycling, orbital warehouse, Space-based Solar Power (SBSP), and astronomical and Earth-observational stations.”

The authors point out that their work is based on some assumptions and theoretical models. The E-walker concept still needs a lot of work, but a prototype is being developed.

It’s likely that the E-walker or some similar system will eventually be used to build telescopes, space stations, and other infrastructure.

The post A Space Walking Robot Could Build a Giant Telescope in Space appeared first on Universe Today.



Tuesday, November 5, 2024

New Report Details What Happened to the Arecibo Observatory

In 1963, the Arecibo Observatory became operational on the island of Puerto Rico. Measuring 305 meters (~1000 ft) in diameter, Arecibo’s spherical reflector dish was the largest radio telescope in the world at the time – a record it maintained until 2016 with the construction of the Five-hundred-meter Aperture Spherical Telescope (FAST) in China. In December 2020, Arecibo’s reflector dish collapsed after some of its support cables snapped, leading the National Science Foundation (NSF) to decommission the Observatory.

Shortly thereafter, the NSF and the University of Central Florida launched investigations to determine what caused the collapse. After nearly four years, the Committee on Analysis of Causes of Failure and Collapse of the 305-Meter Telescope at the Arecibo Observatory released an official report that details their findings. According to the report, the collapse was due to weakened infrastructure caused by long-term zinc creep-induced failure in the telescope’s cable sockets and previous damage caused by Hurricane Maria.

The massive dish was originally called the Arecibo Ionospheric Observatory and was intended for ionospheric research in addition to radio astronomy. The former task was part of the Advanced Research Projects Agency’s (ARPA) Defender Program, which aimed to develop ballistic missile defenses. By 1967, the NSF took over the administration of Arecibo, henceforth making it a civilian facility dedicated to astronomy research. In 1971, NASA signed a memorandum of understanding to share the costs of maintaining and upgrading the facility.

Radar images of 1991 VH and its satellite by Arecibo Observatory in 2008. Credit: NSF

During its many years of service, the Arecibo Observatory accomplished some amazing feats. This included the first-ever discovery of a binary pulsar in 1974, which led to the discovery team (Russell A. Hulse and Joseph H. Taylor) being awarded the Nobel Prize in physics in 1993. In 1985, the observatory discovered the binary asteroid 4337 Arecibo in the outer regions of the Main Asteroid Belt. In 1992, Arecibo discovered the first exoplanets, two rocky bodies roughly four times the mass of Earth around the pulsar PSR 1257+12. This was followed by the discovery of the first repeating Fast Radio Burst (FRB) in 2016.

The observatory was also responsible for sending the famous Arecibo Message, the most powerful broadcast ever beamed into space and humanity’s first true attempt at Messaging Extraterrestrial Intelligence (METI). The pictorial message, crafted by a group of Cornell University and Arecibo scientists that included Frank Drake (creator of the Drake equation), famed science communicator and author Carl Sagan, Richard Isaacman, Linda May, and James C.G. Walker, was aimed at the globular star cluster M13.

According to the Committee report, the structural failure began when Hurricane Maria struck the observatory on September 20th, 2017:

“Maria subjected the Arecibo Telescope to winds of between 105 and 118 mph, with the source of this uncertainty in wind speed discussed below... Based on a review of available records, the winds of Hurricane Maria subjected the Arecibo Telescope’s cables to the highest structural stress they had ever endured since it opened in 1963.”

However, inspections conducted after the hurricane concluded that “no significant damage had jeopardized the Arecibo Telescope’s structural integrity.” Repairs were nonetheless ordered, but the report identified several issues that caused these repairs to be delayed for years. Even so, the investigation indicated that due to the misdirection of repairs “toward components and replacement of a main cable that ultimately never failed,” these would not have prevented the Observatory’s collapse regardless.

Aerial view of the damage to the Arecibo Observatory following the collapse of the telescope platform on December 1st, 2020. Credit: Deborah Martorell

Moreover, in August and November of 2020, an auxiliary cable and then a main cable failed, leading the NSF to announce that it would decommission the telescope through a controlled demolition to avoid a catastrophic collapse. The NSF also stated that the observatory’s other facilities, such as the Ángel Ramos Foundation Visitor Center, would remain operational. Before that could occur, however, more support cables buckled on December 1st, 2020, causing the instrument platform to collapse into the dish.

This collapse also removed the tops of the support towers and partially damaged some of the Observatory’s other buildings. Mercifully, no one was hurt. According to the report, the Arecibo Telescope’s cable spelter sockets had degraded considerably, as indicated by the previous cable failures. They also explain that the collapse was triggered by “hidden outer wire failures,” which had already fractured due to shear stress from zinc creep (aka. zinc decay) in the telescope’s cable spelter sockets.

This issue was not identified in the post-Maria inspection, leading to a lack of consideration of degradation mechanisms and an overestimation of the potential strength of the other cables. According to NSF statements issued in October 2022 and September 2023, the observatory will be remade into an education center known as Arecibo C3, focused on Ciencia (Science), Computación (Computing), and fostering Comunidad (Community). So, while the observatory’s long history of radio astronomy may have ended, it will carry on as a STEM research center, and its legacy will endure.

Further Reading: National Academies Press, Gizmodo

The post New Report Details What Happened to the Arecibo Observatory appeared first on Universe Today.



Habitable Worlds are Found in Safe Places

When we think of exoplanets that may be able to support life, we home in on the habitable zone. A habitable zone is a region around a star where planets receive enough stellar energy to have liquid surface water. It’s a somewhat crude but helpful first step when examining thousands of exoplanets.

However, there’s a lot more to habitability than that.

In a dense stellar environment, planets in habitable zones have more than their host star to contend with. Stellar flybys and exploding supernovae can eject habitable zone exoplanets from their solar systems and even destroy their atmospheres or the planets themselves.

New research examines the threats facing the habitable zone planets in our stellar neighbourhood. The study is “The 10 pc Neighborhood of Habitable Zone Exoplanetary Systems: Threat Assessment from Stellar Encounters & Supernovae,” and it has been accepted for publication in The Astronomical Journal. The lead author is Tisyagupta Pyne from the Integrated Science Education And Research Centre at Visva-Bharati University in India.

The researchers examined the 10-parsec regions around the 84 solar systems with habitable zone exoplanets. Some of these Habitable Zone Systems (HZS) face risks from stars outside of the solar systems. How do these risks affect their habitability? What does it mean for our notion of the habitable zone?

“Among the 4,500+ exoplanet-hosting stars, about 140+ are known to host planets in their habitable zones,” the authors write. “We assess the possible risks that local stellar environment of these HZS pose to their habitability.”

This image from the research shows the sky positions of exoplanet-hosting stars projected on a Mollweide map. HZS are denoted by yellow-green circles, while the remaining population of exoplanets is represented by gray circles. The studied sample of 84 HZS, located within 220 pc of the Sun, is represented by crossed yellow-green circles. The three high-density HZS located near the galactic plane are labeled 1, 2 and 3 in white. The colour bar represents the stellar density, i.e., the number of stars having more than 15 stars within a radius of 5 arc mins. Image Credit: Pyne et al. 2024.

We have more than 150 confirmed exoplanets in habitable zones, and as exoplanet science advances, scientists are developing a more detailed understanding of what habitable zone means. Scientists increasingly use the terms conservative habitable zone and optimistic habitable zone.

The optimistic habitable zone is defined as regions that receive less radiation from their star than Venus received one billion years ago and more than Mars did 3.8 billion years ago. Scientists think that recent Venus (RV) and early Mars (EM) both likely had surface water.

The conservative habitable zone is a more stringent definition. It’s a narrower region around a star where an exoplanet could have surface water. It’s defined by an inner runaway greenhouse edge where stellar flux would vaporize surface water and an outer maximum greenhouse edge where the greenhouse effect of carbon dioxide is dominated by Rayleigh scattering.
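A rough feel for where these edges fall comes from the inverse-square law: the orbital distance (in AU) at which a planet receives an effective stellar flux S_eff, measured in units of the flux Earth receives from the Sun, is d = sqrt(L / S_eff), with L in solar luminosities. The sketch below is a minimal illustration assuming approximate effective-flux limits for a Sun-like star; the exact coefficients depend on the stellar temperature and the climate model used, so treat the numbers as ballpark values rather than the thresholds adopted in any particular study.

```python
import math

def hz_edge_au(luminosity_lsun: float, s_eff: float) -> float:
    """Orbital distance (AU) at which a planet receives s_eff times the flux
    Earth gets from the Sun, for a star of the given luminosity."""
    return math.sqrt(luminosity_lsun / s_eff)

# Approximate effective-flux limits for a Sun-like star (illustrative values only).
LIMITS = {
    "recent Venus (optimistic inner)": 1.78,
    "runaway greenhouse (conservative inner)": 1.05,
    "maximum greenhouse (conservative outer)": 0.35,
    "early Mars (optimistic outer)": 0.32,
}

for name, s_eff in LIMITS.items():
    print(f"{name:40s} ~ {hz_edge_au(1.0, s_eff):.2f} AU")
```

For the Sun, these illustrative limits give edges near 0.75, 0.98, 1.69, and 1.77 AU, which is roughly the familiar picture of an optimistic zone stretching from inside Venus's orbit to just beyond Mars's.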

Those are useful scientific definitions as far as they go. But what about habitable stellar environments? In recent years, scientists have learned a lot about how stars behave, the characteristics of exoplanets, and how the two are intertwined.

“The discovery of numerous extrasolar planets has revealed a diverse array of stellar and planetary characteristics, making systematic comparisons crucial for evaluating habitability and assessing the potential for life beyond our solar system,” the authors write.

To make these necessary systematic comparisons, the researchers developed two metrics: the Solar Similarity Index (SSI) and the Neighborhood Similarity Index (NSI). Since main sequence stars like our Sun are conducive to habitability, the SSI compares our Solar System’s properties with those of other HZS. The NSI compares the properties of stars in a 10-parsec region around the Sun to the same-sized region around other HZS.
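The article does not reproduce the exact SSI/NSI definitions, but the general idea of scoring how closely a 10-parsec neighbourhood resembles a reference one can be illustrated with a toy metric. The function and property names below are hypothetical, and this is not the formulation used by Pyne et al.; it simply maps normalized differences in a few bulk properties to a 0-to-1 score.

```python
def toy_neighbourhood_similarity(target: dict, reference: dict) -> float:
    """Toy 0-1 similarity score: 1.0 means identical bulk properties, lower values
    mean the neighbourhood differs more from the reference.
    Illustration only, not the SSI/NSI defined by Pyne et al."""
    scores = []
    for key, ref_value in reference.items():
        diff = abs(target[key] - ref_value)
        scale = max(abs(ref_value), 1e-9)   # normalize by the reference value
        scores.append(1.0 / (1.0 + diff / scale))
    return sum(scores) / len(scores)

# Hypothetical bulk properties of two 10 pc neighbourhoods.
solar_neighbourhood = {"star_count": 400, "mean_mass_msun": 0.4, "massive_stars": 0}
crowded_neighbourhood = {"star_count": 2500, "mean_mass_msun": 0.6, "massive_stars": 3}

print(toy_neighbourhood_similarity(crowded_neighbourhood, solar_neighbourhood))
```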

This research is mostly based on data from the ESA’s Gaia spacecraft, which is building a map of the Milky Way by measuring one billion stars. But the further away an HZS is, or the dimmer the stars are, the more likely Gaia may not have detected every star, which affects the research’s results. This image shows Gaia’s data completeness. The colour scale indicates the faintest G magnitude at which the 95% completeness threshold is achieved. “Our sample of 84 HZS (green circles) has been overlaid on the map to visually depict the completeness of their respective neighbourhoods,” the authors write. Image Credit: Pyne et al. 2024.

These indices put habitable zones in a larger context.

“While the concept of HZ is vital in the search for habitable worlds, the stellar environment of the planet also plays an important role in determining longevity and maintenance of habitability,” the authors write. “Studies have shown that a high rate of catastrophic events, such as supernovae and close stellar encounters in regions of high stellar density, is not conducive to the evolution of complex life forms and the maintenance of habitability over long periods.”

When radiation and high-energy particles from a distant source reach a planet in a habitable zone, they can cause severe damage to an Earth-like world. Supernovae are a dangerous source of radiation and particles, and if one were to explode close enough to Earth, that would be the end of life. Scientists know that ancient supernovae have left their mark on Earth, but none of them were close enough to destroy the atmosphere.

“Our primary focus is to investigate the effects of SNe on the atmospheres of exoplanets or exomoons assuming their atmospheres to be Earth-like,” the authors write.

The first factor is stellar density. The more stars in a neighbourhood, the greater the likelihood of supernova explosions and stellar flybys.

“The astrophysical impacts of the stellar environment is a ‘low-probability, high-consequence’ scenario for the continuation of habitability of exoplanets,” the authors write. Though disruptive events like supernova explosions or close stellar flybys are unlikely, the consequences can be so severe that habitability is completely eliminated.

When it came to the supernova threat, the researchers looked at high-mass stars in stellar neighbourhoods since only massive stars explode. Pyne and her colleagues found high-mass stars with more than eight solar masses in the 10-parsec neighbourhoods of two HZS: TOI-1227 and HD 48265. “These high-mass stars are potential progenitors for supernova explosions,” the authors explain.

Only one of the HZS is at risk of a stellar flyby. HD 165155 has an encounter rate of about one per 5 Gyr, meaning it is at greater risk of an encounter with another star that could eject planets from its habitable zone.

The team’s pair of indices, the SSI and the NSI, produced divergent results. “… we find that the stellar environments of the majority of HZS exhibit a high degree of similarity (NSI> 0.75) to the solar neighbourhood,” they explain. However, because of the wide variety of stars in HZS, comparing them to the Sun results in a wide range of SSI values.

We know the danger supernova explosions pose to habitability. The initial burst of radiation could kill anything on the surface of a planet that is too close. The ongoing radiation could strip away the atmospheres of planets somewhat further away and could also cause DNA damage in any lifeforms exposed to it. For planets still further from the blast, the supernova could alter their climate and trigger extinctions. There’s no absolutely certain understanding of how far away a planet needs to be to avoid devastation, but many scientists say that within 50 light-years, a planet is probably toast.

We can see the results of some of the stellar flybys the authors are considering. Rogue planets, or free-floating planets (FPPs), are likely in their hapless situations precisely because a stellar interloper got too close to their HZS and disrupted the gravitational relationships between the planets and their stars. We don’t know how many of these FPPs are in the Milky Way, but there could be many billions of them. Future telescopes like the Nancy Grace Roman Space Telescope will help us understand how many there truly are.

An artist’s illustration of a rogue planet, dark and mysterious. Image Credit: NASA

Habitability may be fleeting, and our planet may be the exception. It’s possible that life appears on many planets in habitable zones but can’t last long due to various factors. From a great distance away, we can’t detect all the variables that go into exoplanet habitability.

However, we can gain an understanding of the stellar environments in which potentially habitable exoplanets exist, and this research shows us how.

The post Habitable Worlds are Found in Safe Places appeared first on Universe Today.



New Glenn Booster Moves to Launch Complex 36

Nine years ago, Blue Origin revealed the plans for their New Glenn rocket, a heavy-lift vehicle with a reusable first stage that would compete with SpaceX for orbital flights. Since that time, SpaceX has launched hundreds of rockets, while Blue Origin has been working mostly in secret on New Glenn. Last week, the company rolled out the first prototype of the first-stage booster to the launch complex at Cape Canaveral Space Force Station. If all goes well, we could see a late November test on the launch pad.

The test will be an integrated launch vehicle hot-fire which will include the second stage and a stacked payload.

Images posted on social media by Blue Origin CEO Dave Limp showed the 57-meter (188-foot)-long first stage with its seven BE-4 engines as it was transported from the production facility in Merritt Island, Florida — next to the Kennedy Space Center — to Launch Complex 36 at Cape Canaveral. Limp said that it was a 23-mile, multiple-hour journey “because we have to take the long way around.” The booster was carried on Blue Origin’s transporter, known as GERT (Giant Enormous Rocket Truck).

“Our transporter comprises two trailers connected by cradles and a strongback assembly designed in-house,” said Limp on X. “There are 22 axles and 176 tires on this transport vehicle…The distance between GERT’s front bumper and the trailer’s rear is 310’, about the length of a football field.”

Limp said the next step is to put the first and second stages together on the launch pad for the fully integrated hot fire dress rehearsal. The second stage recently completed its own hot fire at the launch site.

An overhead view of the New Glenn booster heading to launch complex 36 at Cape Canaveral during the night of Oct. 30, 2024. Credit: Blue Origin/Dave Limp.

Hopefully the test will lead to Blue Origin’s first ever launch to orbit. While the New Glenn rocket has had its share of delays, it seems Blue Origin has also taken a slow, measured approach to prepare for its first launch. In February of this year, a boilerplate of the rocket was finally rolled onto the launch pad at Cape Canaveral for testing.  Then in May 2024, New Glenn was rolled out again for additional testing. Now, the fully integrated test in the next few weeks will perhaps lead to a launch by the end of the year.

New Glenn’s seven engines will give it more than 3.8 million pounds of thrust on liftoff. The goal is for New Glenn to reuse its first-stage booster and the seven engines powering it, with recovery on a barge located downrange off the coast of Florida in the Atlantic Ocean.

New Glenn boosters are designed for 25 flights.

Blue Origin says New Glenn will launch payloads into high-energy orbits. It can carry more than 13 metric tons to geostationary transfer orbit (GTO) and 45 metric tons to low Earth orbit (LEO).

For the first flight, Blue Origin will be flying its own hardware as a payload, a satellite deployment technology called Blue Ring. Even though it doesn’t have a paying customer for the upcoming launch, a successful flight would be the first of two certification flights the U.S. Space Force requires before the rocket can be awarded future national security missions alongside SpaceX and United Launch Alliance (ULA).

Additional details can be found at PhysOrg and NASASpaceflight.com.

The post New Glenn Booster Moves to Launch Complex 36 appeared first on Universe Today.



How Many Additional Exoplanets are in Known Systems?

One thing we’ve learned in recent decades is that exoplanets are surprisingly common. So far, we’ve confirmed nearly 6,000 planets, and we have evidence for thousands more. Most of these planets were discovered using the transit method, though there are other methods as well. Many stars are known to have multiple planets, such as the TRAPPIST-1 system with seven Earth-sized worlds. But even within known planetary systems there could be planets we’ve overlooked. Perhaps their orbits don’t pass in front of the star from our vantage point, or the evidence of their presence is buried in data noise. How might we find them? A recent paper on the arXiv has an interesting approach.

Rather than combing through the observational data trying to extract more planets from the noise, the authors suggest we look at the orbital dynamics of known systems to see if additional planets could fit between the ones we know about. Established systems are millions or billions of years old, so their planetary orbits must be stable on those timescales. If the planets of a system are “closely packed,” then adding new planets to the mix would quickly destabilize it. If the system is “loosely packed,” then we could add hypothetical planets between the others, and the system would still be dynamically stable.

The seven planetary systems considered. Credit: Horner, et al

To show how this would work, the authors consider seven planetary systems discovered by the Transiting Exoplanet Survey Satellite (TESS) known to have two planets. Since it isn’t likely that a system has only two planets, there is a good chance they have others. The team then ran thousands of simulations of these systems with hypothetical planets, calculating if they could remain stable over millions of years. They found that for two of the systems, extra planets (other than planets much more distant than the known ones) could be ruled out on dynamical grounds. Extra planets would almost certainly destabilize the systems. But five of the systems could remain stable with more planets. That doesn’t mean those systems have more planets, only that they could.
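The kind of check the authors describe can be sketched with an off-the-shelf N-body integrator. The snippet below uses the open-source REBOUND package to drop a hypothetical planet between two known ones and integrate, treating a close encounter as a sign of instability. The star and planet parameters, the encounter threshold, and the integration time are placeholders for illustration, not values from the TESS systems studied in the paper, and real stability studies integrate for millions of orbits.

```python
import rebound

def stable_with_extra_planet(a_extra_au: float, years: float = 1e4) -> bool:
    """Insert a test planet at semi-major axis a_extra_au between two known planets
    and integrate; return False if any pair suffers a close encounter.
    All masses, orbits, and thresholds are illustrative placeholders."""
    sim = rebound.Simulation()
    sim.units = ("yr", "AU", "Msun")
    sim.add(m=1.0)                      # Sun-like host star
    sim.add(m=3e-5, a=0.05)             # inner known planet (~10 Earth masses)
    sim.add(m=3e-5, a=a_extra_au)       # hypothetical "in-betweener"
    sim.add(m=3e-5, a=0.15)             # outer known planet
    sim.move_to_com()
    sim.integrator = "whfast"
    sim.dt = sim.particles[1].P / 20.0  # ~20 steps per innermost orbit
    sim.exit_min_distance = 1e-3        # AU; flag close approaches as instability
    try:
        sim.integrate(years)
    except rebound.Encounter:
        return False
    return True

print(stable_with_extra_planet(0.09))
```

Sweeping the test planet's semi-major axis (and mass) over a grid, and repeating for many random initial phases, is the kind of procedure that lets one label the gaps between known planets as dynamically "full" or "open."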

One of the things this work shows is that most of the currently known exoplanetary systems likely have yet-undiscovered worlds. This approach could also help us sort systems to determine which ones might deserve a further look. We are still in the early stages of discovery, and we are gathering data with incredible speed. We need tools like this so we aren’t overwhelmed by piles of new data.

Reference: Horner, Jonathan, et al. “The Search for the Inbetweeners: How packed are TESS planetary systems?” arXiv preprint arXiv:2411.00245 (2024).

The post How Many Additional Exoplanets are in Known Systems? appeared first on Universe Today.



Hubble and Webb are the Dream Team. Don't Break Them Up

Many people think of the James Webb Space Telescope as a sort of Hubble 2. They understand that the Hubble Space Telescope (HST) has served us well but is now old and overdue for replacement. NASA seems to agree: it has not sent a maintenance mission in over fifteen years and is already preparing to wind down operations. But a recent paper argues that this is a mistake. Despite its age, HST still performs extremely well and continues to produce an avalanche of valuable scientific results. And given that JWST was never designed as a replacement for HST — it is an infrared (IR) telescope — we would best be served by operating both telescopes in tandem, to maximize coverage of all observations.

Let’s not fool ourselves: the Hubble Space Telescope (HST) is old and is eventually going to fall back to Earth. Although it was designed to be repairable and upgradable, there have been no servicing missions since 2009. Those missions relied on the Space Shuttle, which could capture the telescope and provide a working base for astronauts. Servicing missions could last weeks, and only the Space Shuttle could transport a full crew of astronauts to the telescope and house them for the duration of the mission.

Without those servicing missions, failing components can no longer be replaced, and the overall health of HST will keep declining. If nothing is done, HST will eventually stop working altogether. To avoid it becoming just another piece of space junk, plans are already being developed to de-orbit it and send it crashing into the Pacific Ocean. But that’s no reason to give up on it. It still has as clear a view of the cosmos as ever, and mission scientists are doing an excellent job of working around technical problems as they arise.

The James Webb Space Telescope was launched into space on Christmas Day in 2021. Its system of foldable hexagonal mirrors gives it an effective diameter some 2.7 times larger than HST’s, and it is designed to see down into the mid-IR range. Within months of deployment, it had already seen things that clashed with existing models of how the Universe formed, creating a mini-crisis in some fields and leading unscrupulous news editors to write headlines questioning whether the “Big Bang Theory” was under threat!

This image of NASA’s Hubble Space Telescope was taken on May 19, 2009 after deployment during Servicing Mission 4. NASA

The reason JWST was able to capture such ancient galaxies is that it is primarily an IR telescope: As the Universe expands, photons from distant objects get red-shifted until stars that originally shone in visible light can now only be seen in the IR. But these IR views are proving extremely valuable in other scientific fields apart from cosmology. In fact, many of the most striking images released by JWST’s press team are IR images of familiar objects, revealing hidden complexities that had not been seen before.

This is a key difference between the two telescopes: while HST’s range overlaps slightly with JWST’s, it can see all the way up into ultraviolet (UV) wavelengths. HST was launched in 1990, seven years late and billions of dollars over budget. Its 2.4-meter primary mirror needed to be one of the most precisely ground mirrors ever made, because it was intended to be diffraction limited at UV wavelengths. Famously, avoidable problems in the testing process led to it being very precisely figured to a slightly wrong shape, producing spherical aberration that prevented it from coming to a sharp focus.
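To get a sense of what being diffraction limited at UV wavelengths demands, the Rayleigh criterion (θ ≈ 1.22 λ/D) gives the sharpest angular resolution an aperture can deliver: the shorter the wavelength, the finer the limit, and the tighter the tolerances on the mirror’s figure. A minimal worked example, using illustrative wavelengths:

```python
import math

def rayleigh_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution (Rayleigh criterion) in arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

HST_APERTURE_M = 2.4
for label, wavelength_m in [("UV (200 nm)", 200e-9), ("visible (550 nm)", 550e-9)]:
    print(f"{label}: {rayleigh_limit_arcsec(wavelength_m, HST_APERTURE_M):.3f} arcsec")
```

For a 2.4 m mirror that works out to roughly 0.02 arcseconds at 200 nm versus about 0.06 arcseconds in visible light, which is why the UV requirement drove such extreme precision in the mirror's surface.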

Fortunately the telescope was designed from the start to be serviceable, and even returned to Earth for repairs by the Space Shuttle if necessary. In the end though, NASA opticians were able to design and build a set of corrective optics to solve the problem, and the COSTAR system was installed by astronauts on the first servicing mission. Over the years, NASA sent up three more servicing missions, to upgrade or repair components, and install new instruments.

Illustration of NASA’s James Webb Space Telescope. Credits: NASA

HST is arguably one of the most successful scientific instruments ever built. Since 1990, it has been the subject of approximately 1200 science press releases, which have collectively been read more than 400 million times. The more than 46,000 scientific papers written using HST data have been cited more than 900,000 times! And even in its current degraded state, it still provided data for 1435 papers in 2023 alone.

JWST also ran over time and over budget, but had a far more successful deployment. Despite having a much larger mirror, with more than six times the collecting area of HST, the entire observatory only weighs half as much as HST. Because of its greater sensitivity, and the fact that it can see ancient light redshifted into IR wavelengths, it can see far deeper into the Universe than HST. It is these observations, of galaxies formed when the Universe was extremely young (100 – 180 million years), that created such excitement shortly after it was deployed.
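Those headline comparisons follow directly from the published mirror and spacecraft figures. The quick check below uses commonly quoted approximate values (effective collecting areas of roughly 25.4 m² for JWST and 4.0 m² for HST, and launch masses of about 6.2 and 11.1 tonnes); the exact ratios shift slightly depending on how obstructions and instrument masses are counted.

```python
# Back-of-the-envelope JWST vs HST comparison using approximate public figures.
JWST_DIAMETER_M, HST_DIAMETER_M = 6.5, 2.4
JWST_AREA_M2, HST_AREA_M2 = 25.4, 4.0        # effective collecting areas
JWST_MASS_KG, HST_MASS_KG = 6_200, 11_100    # approximate launch masses

print(f"diameter ratio: {JWST_DIAMETER_M / HST_DIAMETER_M:.1f}x")  # ~2.7x
print(f"area ratio:     {JWST_AREA_M2 / HST_AREA_M2:.1f}x")        # ~6.4x
print(f"mass ratio:     {JWST_MASS_KG / HST_MASS_KG:.2f}")         # ~0.56, about half
```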

As valuable as these telescopes are, they will not last forever. JWST is located deep in space, some 1.5 million kilometers from Earth near the L2 Lagrange point. When it eventually fails, it will become just another piece of debris orbiting the Sun. HST, however, is in Low Earth Orbit (LEO) and suffers a very slight amount of drag from the faint outer reaches of the atmosphere. Over time it will gradually lose speed, drifting downwards until it enters the atmosphere proper and crashes to Earth. Because of its size, it will not burn up completely, and large chunks will smash into the surface.

Because it cannot be predicted exactly where it will re-enter, mission planners always intended to capture it with the Space Shuttle and return it to Earth before this happened. Its final resting place was supposed to be on display in a museum, but unfortunately the shuttle program was cancelled. The current plan is to send up an uncrewed rocket that will dock with the telescope (a special attachment was installed on the final servicing mission for this purpose) and deorbit it in a controlled way to ensure that its pieces land safely in the ocean.

You can find the original paper at https://arxiv.org/abs/2410.01187

The post Hubble and Webb are the Dream Team. Don't Break Them Up appeared first on Universe Today.



Monday, November 4, 2024

Scientists Have Figured out why Martian Soil is so Crusty

On November 26th, 2018, NASA’s Interior Exploration using Seismic Investigations, Geodesy, and Heat Transport (InSight) mission landed on Mars. This was a major milestone in Mars exploration since it was the first time a research station had been deployed to the surface to probe the planet’s interior. One of the most important instruments InSight would use to do this was the Heat Flow and Physical Properties Package (HP3) developed by the German Aerospace Center (DLR). Also known as the Martian Mole, this instrument measured the heat flow from deep inside the planet for four years.

The HP3 was designed to dig up to five meters (~16.5 ft) into the surface to sense heat deeper in Mars’ interior. Unfortunately, the Mole struggled to burrow itself and eventually got just beneath the surface, which was a surprise to scientists. Nevertheless, the Mole gathered considerable data on the daily and seasonal fluctuations below the surface. Analysis of this data by a team from the German Aerospace Center (DLR) has yielded new insight into why Martian soil is so “crusty.” According to their findings, temperatures in the top 40 cm (~16 inches) of the Martian surface lead to the formation of salt films that harden the soil.

The analysis was conducted by a team from the Microgravity User Support Center (MUSC) of the DLR Space Operations and Astronaut Training Institution in Cologne, which is responsible for overseeing the HP3 experiment. The heat data it obtained from the interior could be integral to understanding Mars’s geological evolution and addressing theories about its core region. At present, scientists suspect that geological activity on Mars largely ceased by the late Hesperian period (ca. 3 billion years ago), though there is evidence that lava still flows there today.

The “Mars Mole,” Heat Flow and Physical Properties Package (HP³). Credit: DLR

This was likely caused by Mars’ interior cooling faster than Earth’s due to its lower mass and lower internal pressure. Scientists theorize that this caused Mars’ outer core to solidify while its inner core became liquid—though this remains an open question. By comparing the subsurface temperatures obtained by InSight to surface temperatures, the DLR team could measure the rate of heat transport in the crust (thermal diffusivity) and its thermal conductivity. From this, the density of the Martian soil could be estimated for the first time.

The team determined that the density of the uppermost 30 cm (~12 inches) of soil is comparable to basaltic sand – something that was not anticipated based on orbiter data. This material is common on Earth and is created by weathering volcanic rock rich in iron and magnesium. Beneath this layer, the soil density is comparable to consolidated sand and coarser basalt fragments. Tilman Spohn, the principal investigator of the HP3 experiment at the DLR Institute of Planetary Research, explained in a DLR press release:

“To get an idea of the mechanical properties of the soil, I like to compare it to floral foam, widely used in floristry for flower arrangements. It is a lightweight, highly porous material in which holes are created when plant stems are pressed into it... Over the course of seven Martian days, we measured thermal conductivity and temperature fluctuations at short intervals.

“Additionally, we continuously measured the highest and lowest daily temperatures over the second Martian year. The average temperature over the depth of the 40-centimetre-long thermal probe was minus 56 degrees Celsius (217.5 Kelvin). These records, documenting the temperature curve over daily cycles and seasonal variations, were the first of their kind on Mars.”

NASA’s InSight spacecraft landed in the Elysium Planitia region on Mars on 26 November 2018. Credit: NASA-JPL/USGS/MOLA/DLR

Because the encrusted Martian soil (aka. “duricrust”) extends to a depth of 20 cm (~8 inches), the Mole managed to penetrate just a little more than 40 cm (~16 inches) – well short of its 5 m (~16.5 ft) objective. Nevertheless, the data obtained at this depth has provided valuable insight into heat transport on Mars. Accordingly, the team found that ground temperatures fluctuated by only 5 to 7 °C (9 to 12.5 °F) during a Martian day, a tiny fraction of the fluctuations observed on the surface—110 to 130 °C (198 to 234 °F).

Seasonally, they noted temperature fluctuations of 13 °C (~23.5 °F), while the layers near the surface remained below the freezing point of water on Mars. This demonstrates that the Martian soil is an excellent insulator, significantly reducing the large temperature differences at shallow depths. This influences various physical properties of Martian soil, including elasticity, thermal conductivity, heat capacity, the movement of material within, and the speed at which seismic waves can pass through it.
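The insulating behaviour the team describes follows from how a periodic surface temperature wave decays with depth: for a soil with thermal diffusivity κ driven with period P, the swing is damped by exp(-z/δ), where the skin depth is δ = sqrt(κP/π). The sketch below uses an illustrative diffusivity typical of loose regolith, not the value derived by the DLR team, to show why day-night swings of well over 100 °C shrink to a few degrees a short distance below the surface.

```python
import math

def damping_factor(depth_m: float, diffusivity_m2_s: float, period_s: float) -> float:
    """Amplitude attenuation of a periodic temperature wave at a given depth."""
    skin_depth = math.sqrt(diffusivity_m2_s * period_s / math.pi)
    return math.exp(-depth_m / skin_depth)

SOL_SECONDS = 88_775      # length of a Martian day in seconds
KAPPA = 5e-8              # m^2/s, illustrative diffusivity for loose regolith
SURFACE_SWING_C = 120.0   # representative day-night swing at the surface

for depth in (0.1, 0.2, 0.4):
    swing = SURFACE_SWING_C * damping_factor(depth, KAPPA, SOL_SECONDS)
    print(f"depth {depth:.1f} m: day-night swing ~ {swing:.2f} deg C")
```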

“Temperature also has a strong influence on chemical reactions occurring in the soil, on the exchange with gas molecules in the atmosphere, and therefore also on potential biological processes regarding possible microbial life on Mars,” said Spohn. “These insights into the properties and strength of the Martian soil are also of particular interest for future human exploration of Mars.”

What was particularly interesting, though, is how the temperature fluctuations enable the formation of salty brines for ten hours a day (when there is sufficient moisture in the atmosphere) in winter and spring. Therefore, the solidification of this brine is the most likely explanation for the duricrust layer beneath the surface. This information could prove very useful as future missions explore Mars and attempt to probe beneath the surface to learn more about the Red Planet’s history.

Further Reading: DLR

The post Scientists Have Figured out why Martian Soil is so Crusty appeared first on Universe Today.