Wednesday, November 6, 2024

A Space Walking Robot Could Build a Giant Telescope in Space

The Hubble Space Telescope was carried to space inside the space shuttle Discovery and then released into low-Earth orbit. The James Webb Space Telescope was squeezed inside the nose cone of an Ariane 5 rocket and then launched. It deployed its mirror and shade on its way to its home at the Sun-Earth L2 Lagrange point.

However, the ISS was assembled in space with components launched at different times. Could it be a model for building future space telescopes and other space facilities?

The Universe has a lot of dark corners that need to be peered into. That’s why we’re driven to build more powerful telescopes, which means larger mirrors. However, it becomes increasingly difficult to launch them into space inside rocket nose cones. Since we don’t have space shuttles anymore, this leads us to a natural conclusion: assemble our space telescopes in space using powerful robots.

New research in the journal Acta Astronautica examines the viability of using walking robots to build space telescopes.

The research is “The new era of walking manipulators in space: Feasibility and operational assessment of assembling a 25 m Large Aperture Space Telescope in orbit.” The lead author is Manu Nair from the Lincoln Centre for Autonomous Systems in the UK.

“This research is timely given the constant clamour for high-resolution astronomy and Earth observation within the space community and serves as a baseline for future missions with telescopes of much larger aperture, missions requiring assembly of space stations, and solar-power generation satellites, to list a few,” the authors write.

While Canadarm2 and the European Robotic Arm on the ISS have proven capable and effective, they have limitations. They’re remotely operated by astronauts and have only limited walking abilities.

Recognizing the need for more capable space telescopes, space stations, and other infrastructure, Nair and his co-authors are developing a concept for an improved walking robot. “To address the limitations of conventional walking manipulators, this paper presents a novel seven-degrees-of-freedom dexterous End-Over-End Walking Robot (E-Walker) for future In-Space Assembly and Manufacturing (ISAM) missions,” they write.

An illustration of the E-walker. The robot has seven degrees of freedom, meaning it has seven independent motions. Image Credit: Mini Rai, University of Lincoln.

Robotics, Automation, and Autonomous Systems (RAAS) will play a big role in the future of space telescopes and other infrastructure. These systems require dexterity, a high degree of autonomy, redundancy, and modularity. A lot of work remains to create RAAS that can operate in the harsh environment of space. The E-Walker is a concept that aims to fulfill some of these requirements.

The authors point out how robots are being used in unique industrial settings here on Earth. The Joint European Torus (JET) is being decommissioned, and a Boston Dynamics Spot quadruped robot was brought in to test how effective robots can be in that setting. It moved around the JET autonomously during a 35-day trial, mapping the facility and taking sensor readings, all while avoiding obstacles and personnel.

The Boston Dynamics Spot robot spent 35 days working autonomously on the Joint European Torus. Here, Spot is inspecting wires and pipes at the facility at Culham, near Oxford (Image Credit: UKAEA)

Using Spot during an industrial shutdown shows the potential of autonomous robots. However, robots still have a long way to go before they can build a space telescope. The authors’ case study could be an important initial step.

Their case study is the hypothetical LAST, a Large Aperture Space Telescope with a wide-field, 25-meter primary mirror that operates in visible light. LAST is the backdrop for the researchers’ feasibility study.

LAST’s primary mirror would be modular, and its pieces would have connector ports and interfaces for construction and for data, power, and thermal transfer. This type of modularity would make it easier for autonomous systems to assemble the telescope.

LAST would build its mirror using Primary Mirror Units (PMUs). Nineteen PMUs make up a Primary Mirror Segment (PMS), and 18 PMSs would constitute LAST’s 25-meter primary mirror. A total of 342 PMUs would be needed to complete the telescope.
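Those segment counts follow directly from the hexagonal packing the paper describes; as a quick sanity check, a few lines of Python reproduce the part count (the numbers are the paper’s, the script is just illustrative arithmetic):

```python
# Part-count arithmetic for the proposed LAST primary mirror,
# using the figures quoted in the paper.
pmus_per_segment = 19   # Primary Mirror Units per Primary Mirror Segment
segments = 18           # Primary Mirror Segments in the 25 m mirror

total_pmus = pmus_per_segment * segments
print(total_pmus)       # 342, matching the paper's total
```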

This figure shows how LAST would be constructed. 342 Primary Mirror Units make up the 18 Primary Mirror Segments, adding up to a 25-meter primary mirror. (b) shows how the center of each PMU is found, and (c) shows a PMU and its connectors. Image Credit: Nair et al. 2024.

The E-Walker concept would also have two spacecraft: a Base Spacecraft (BSC) and a Storage Spacecraft (SSC). The BSC would act as a kind of mothership, sending required commands to the E-Walker, monitoring its operational state, and ensuring that things go smoothly. The SSC would hold all of the PMUs in a stacked arrangement, and the E-Walker would retrieve one at a time.

The researchers developed eleven different Concepts of Operations (ConOps) for the LAST mission. Some of the ConOps include multiple E-Walkers working cooperatively. The goals are to optimize task-sharing, minimize the mass that must be lifted from the ground, and simplify control and motion planning. “The above-mentioned eleven mission scenarios are studied further to choose the most feasible ConOps for the assembly of the 25m LAST,” they explain.

This figure summarizes the 11 mission ConOps developed for LAST. (a) shows assembly with a single E-walker, (b) shows partially shared responsibilities among the E-walkers, (c) shows equally shared responsibilities between E-walkers, and (d) shows assembly carried out in two separate units, which is the safer assembly option. Image Credit: Nair et al. 2024.

Advanced tools like robotics and AI will be mainstays in the future of space exploration. It’s almost impossible to imagine a future where they aren’t critical, especially as our goals become more complex. “The capability to assemble complex systems in orbit using one or more robots will be an absolute requirement for supporting a resilient future orbital ecosystem,” the authors write. “In the forthcoming decades, newer infrastructures in the Earth’s orbits, which are much more advanced than the International Space Station, are needed for in-orbit servicing, manufacturing, recycling, orbital warehouse, Space-based Solar Power (SBSP), and astronomical and Earth-observational stations.”

The authors point out that their work is based on some assumptions and theoretical models. The E-walker concept still needs a lot of work, but a prototype is being developed.

It’s likely that the E-walker or some similar system will eventually be used to build telescopes, space stations, and other infrastructure.

The post A Space Walking Robot Could Build a Giant Telescope in Space appeared first on Universe Today.



Tuesday, November 5, 2024

New Report Details What Happened to the Arecibo Observatory

In 1963, the Arecibo Observatory became operational on the island of Puerto Rico. Measuring 305 meters (~1000 ft) in diameter, Arecibo’s spherical reflector dish was the largest radio telescope in the world at the time – a record it maintained until 2016 with the construction of the Five-hundred-meter Aperture Spherical Telescope (FAST) in China. In December 2020, Arecibo’s reflector dish collapsed after some of its support cables snapped, leading the National Science Foundation (NSF) to decommission the Observatory.

Shortly thereafter, the NSF and the University of Central Florida launched investigations to determine what caused the collapse. After nearly four years, the Committee on Analysis of Causes of Failure and Collapse of the 305-Meter Telescope at the Arecibo Observatory released an official report that details their findings. According to the report, the collapse was due to weakened infrastructure caused by long-term zinc creep-induced failure in the telescope’s cable sockets and previous damage caused by Hurricane Maria.

The massive dish was originally called the Arecibo Ionospheric Observatory and was intended for ionospheric research in addition to radio astronomy. The former task was part of the Advanced Research Projects Agency’s (ARPA) Defender Program, which aimed to develop ballistic missile defenses. By 1967, the NSF took over the administration of Arecibo, henceforth making it a civilian facility dedicated to astronomy research. By 1971, NASA signed a memorandum of understanding to share the costs of maintaining and upgrading the facility.

Radar images of 1991 VH and its satellite by Arecibo Observatory in 2008. Credit: NSF

During its many years of service, the Arecibo Observatory accomplished some amazing feats. This included the first-ever discovery of a binary pulsar in 1974, which led to the discovery team (Russell A. Hulse and Joseph H. Taylor) being awarded the Nobel Prize in physics in 1993. In 1985, the observatory discovered the binary asteroid 4337 Arecibo in the outer regions of the Main Asteroid Belt. In 1992, Arecibo discovered the first exoplanets, two rocky bodies roughly four times the mass of Earth around the pulsar PSR 1257+12. This was followed by the discovery of the first repeating Fast Radio Burst (FRB) in 2016.

The observatory was also responsible for sending the famous Arecibo Message, the most powerful broadcast ever beamed into space and humanity’s first true attempt at Messaging Extraterrestrial Intelligence (METI). The pictorial message, crafted by a group of Cornell University and Arecibo scientists that included Frank Drake (creator of the Drake equation), famed science communicator and author Carl Sagan, Richard Isaacman, Linda May, and James C.G. Walker, was aimed at the globular star cluster M13.

According to the Committee report, the structural failure began when Hurricane Maria struck the Observatory on September 20th, 2017:

“Maria subjected the Arecibo Telescope to winds of between 105 and 118 mph, with the source of this uncertainty in wind speed discussed below... Based on a review of available records, the winds of Hurricane Maria subjected the Arecibo Telescope’s cables to the highest structural stress they had ever endured since it opened in 1963.”

However, inspections conducted after the hurricane concluded that “no significant damage had jeopardized the Arecibo Telescope’s structural integrity.” Repairs were nonetheless ordered, but the report identified several issues that delayed them for years. Even so, the investigation indicated that because the repairs were misdirected “toward components and replacement of a main cable that ultimately never failed,” they would not have prevented the Observatory’s collapse in any case.

Aerial view of the damage to the Arecibo Observatory following the collapse of the telescope platform on December 1st, 2020. Credit: Deborah Martorell

Moreover, in August and November of 2020, an auxiliary cable and a main cable suffered structural failures, leading the NSF to announce that it would decommission the telescope through a controlled demolition to avoid a catastrophic collapse. The agency also stated that the other facilities at the observatory would remain operational, such as the Ángel Ramos Foundation Visitor Center. Before that could occur, however, more support cables buckled on December 1st, 2020, causing the instrument platform to collapse into the dish.

This collapse also removed the tops of the support towers and partially damaged some of the Observatory’s other buildings. Mercifully, no one was hurt. According to the report, the Arecibo Telescope’s cable spelter sockets had degraded considerably, as indicated by the previous cable failures. The committee explains that the collapse was triggered by “hidden outer wire failures” — wires that had already fractured due to shear stress from zinc creep (aka. zinc decay) in the sockets.

This issue was not identified in the post-Maria inspection, leading to a lack of consideration of degradation mechanisms and an overestimation of the remaining strength of the other cables. According to NSF statements issued in October 2022 and September 2023, the observatory will be remade into an education center known as Arecibo C3, focused on Ciencia (Science), Computación (Computing), and fostering Comunidad (Community). So, while the observatory’s long history of radio astronomy may have ended, it will carry on as a STEM research center, and its legacy will endure.

Further Reading: National Academies Press, Gizmodo

The post New Report Details What Happened to the Arecibo Observatory appeared first on Universe Today.



Habitable Worlds are Found in Safe Places

When we think of exoplanets that may be able to support life, we home in on the habitable zone. A habitable zone is a region around a star where planets receive enough stellar energy to have liquid surface water. It’s a somewhat crude but helpful first step when examining thousands of exoplanets.

However, there’s a lot more to habitability than that.

In a dense stellar environment, planets in habitable zones have more than their host star to contend with. Stellar flybys and exploding supernovae can eject habitable zone exoplanets from their solar systems and even destroy their atmospheres or the planets themselves.

New research examines the threats facing the habitable zone planets in our stellar neighbourhood. The study is “The 10 pc Neighborhood of Habitable Zone Exoplanetary Systems: Threat Assessment from Stellar Encounters & Supernovae,” and it has been accepted for publication in The Astronomical Journal. The lead author is Tisyagupta Pyne from the Integrated Science Education And Research Centre at Visva-Bharati University in India.

The researchers examined the 10-parsec regions around the 84 solar systems with habitable zone exoplanets. Some of these Habitable Zone Systems (HZS) face risks from stars outside their systems. How do these risks affect their habitability? What does it mean for our notion of the habitable zone?

“Among the 4,500+ exoplanet-hosting stars, about 140+ are known to host planets in their habitable zones,” the authors write. “We assess the possible risks that local stellar environment of these HZS pose to their habitability.”

This image from the research shows the sky positions of exoplanet-hosting stars projected on a Mollweide map. HZS are denoted by yellow-green circles, while the remaining population of exoplanet hosts is represented by gray circles. The studied sample of 84 HZS, located within 220 pc of the Sun, is represented by crossed yellow-green circles. The three high-density HZS located near the galactic plane are labeled 1, 2 and 3 in white. The colour bar represents stellar density, i.e., the number of stars with more than 15 neighbouring stars within a radius of 5 arcminutes. Image Credit: Pyne et al. 2024.

We have more than 150 confirmed exoplanets in habitable zones, and as exoplanet science advances, scientists are developing a more detailed understanding of what habitable zone means. Scientists increasingly use the terms conservative habitable zone and optimistic habitable zone.

The optimistic habitable zone is defined as the region that receives less radiation from its star than Venus received one billion years ago and more than Mars did 3.8 billion years ago. Scientists think that recent Venus (RV) and early Mars (EM) both likely had surface water.

The conservative habitable zone is a more stringent definition. It’s a narrower region around a star where an exoplanet could have surface water. It’s defined by an inner runaway greenhouse edge where stellar flux would vaporize surface water and an outer maximum greenhouse edge where the greenhouse effect of carbon dioxide is dominated by Rayleigh scattering.
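For a star of known luminosity, both zone definitions reduce to a simple flux scaling: the distance at which a planet receives a given effective stellar flux grows as the square root of the luminosity. A minimal sketch, using representative effective-flux values in the style of Kopparapu et al. (2013) — the exact coefficients are assumptions here, not taken from this paper:

```python
import math

def hz_edge_au(luminosity_lsun, s_eff):
    """Distance (AU) at which a star of the given luminosity (in solar
    luminosities) delivers effective flux s_eff (in units of the flux
    Earth receives from the Sun)."""
    return math.sqrt(luminosity_lsun / s_eff)

L = 1.0  # a Sun-like star

# Assumed representative effective fluxes, after Kopparapu et al. (2013):
print(hz_edge_au(L, 1.78))    # optimistic inner edge (recent Venus)         ~0.75 AU
print(hz_edge_au(L, 1.04))    # conservative inner edge (runaway greenhouse) ~0.98 AU
print(hz_edge_au(L, 0.356))   # conservative outer edge (maximum greenhouse) ~1.68 AU
print(hz_edge_au(L, 0.32))    # optimistic outer edge (early Mars)           ~1.77 AU
```

For a Sun-like star, this puts the conservative zone at roughly 0.98–1.68 AU, with the optimistic zone slightly wider at both ends.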

Those are useful scientific definitions as far as they go. But what about habitable stellar environments? In recent years, scientists have learned a lot about how stars behave, the characteristics of exoplanets, and how the two are intertwined.

“The discovery of numerous extrasolar planets has revealed a diverse array of stellar and planetary characteristics, making systematic comparisons crucial for evaluating habitability and assessing the potential for life beyond our solar system,” the authors write.

To make these necessary systematic comparisons, the researchers developed two metrics: the Solar Similarity Index (SSI) and the Neighborhood Similarity Index (NSI). Since main sequence stars like our Sun are conducive to habitability, the SSI compares our Solar System’s properties with those of other HZS. The NSI compares the properties of stars in a 10-parsec region around the Sun to those in the same size region around other HZS.
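The article doesn’t spell out how the SSI and NSI are computed, so any concrete formula would be a guess; the toy metric below only illustrates the general idea of scoring two systems’ properties on a common 0-to-1 scale, with entirely hypothetical inputs:

```python
import numpy as np

def similarity_index(props_a, props_b):
    """Toy similarity metric between two property vectors, scaled to [0, 1].
    The paper's actual SSI/NSI definitions are not given in the article;
    this is only a generic illustration of the idea."""
    a = np.asarray(props_a, dtype=float)
    b = np.asarray(props_b, dtype=float)
    # Normalize each property so differences are comparable across units.
    scale = np.maximum(np.abs(a), np.abs(b))
    scale[scale == 0] = 1.0
    diff = np.abs(a - b) / scale
    return 1.0 - diff.mean()

# Hypothetical inputs: (stellar mass [Msun], effective temperature [K])
sun = [1.0, 5772.0]
hzs_host = [0.8, 5000.0]
print(similarity_index(sun, hzs_host))  # closer to 1 = more Sun-like
```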

This research is mostly based on data from the ESA’s Gaia spacecraft, which is building a map of the Milky Way by measuring one billion stars. But the further away an HZS is, or the dimmer the stars are, the more likely Gaia may not have detected every star, which affects the research’s results. This image shows Gaia’s data completeness. The colour scale indicates the faintest G magnitude at which the 95% completeness threshold is achieved. “Our sample of 84 HZS (green circles) has been overlaid on the map to visually depict the completeness of their respective neighbourhoods,” the authors write. Image Credit: Pyne et al. 2024.

These indices put habitable zones in a larger context.

“While the concept of HZ is vital in the search for habitable worlds, the stellar environment of the planet also plays an important role in determining longevity and maintenance of habitability,” the authors write. “Studies have shown that a high rate of catastrophic events, such as supernovae and close stellar encounters in regions of high stellar density, is not conducive to the evolution of complex life forms and the maintenance of habitability over long periods.”

When radiation and high-energy particles from a distant source reach a planet in a habitable zone, they can cause severe damage to Earth-like planets. Supernovae are a dangerous source of radiation and particles, and if one were to explode close enough to Earth, that would be the end of life. Scientists know that ancient supernovae have left their mark on Earth, but none of them were close enough to destroy the atmosphere.

“Our primary focus is to investigate the effects of SNe on the atmospheres of exoplanets or exomoons assuming their atmospheres to be Earth-like,” the authors write.

The first factor is stellar density. The more stars in a neighbourhood, the greater the likelihood of supernova explosions and stellar flybys.

“The astrophysical impacts of the stellar environment is a ‘low-probability, high-consequence’ scenario for the continuation of habitability of exoplanets,” the authors write. Though disruptive events like supernova explosions or close stellar flybys are unlikely, the consequences can be so severe that habitability is completely eliminated.

When it came to the supernova threat, the researchers looked at high-mass stars in stellar neighbourhoods since only massive stars explode. Pyne and her colleagues found high-mass stars with more than eight solar masses in the 10-parsec neighbourhoods of two HZS: TOI-1227 and HD 48265. “These high-mass stars are potential progenitors for supernova explosions,” the authors explain.

Only one of the HZS is at risk of a stellar flyby. HD 165155 has an encounter rate of ~1 per 5 Gyr. That means it’s at greater risk of an encounter with another star that could eject planets from its habitable zone.

The team’s pair of indices, the SSI and the NSI, produced divergent results. “… we find that the stellar environments of the majority of HZS exhibit a high degree of similarity (NSI > 0.75) to the solar neighbourhood,” they explain. However, because of the wide variety of stars in HZS, comparing them to the Sun results in a wide range of SSI values.

We know the danger supernova explosions pose to habitability. The initial burst of radiation could kill anything on the surface of a planet too close. The ongoing radiation could strip away the atmospheres of some planets further away and could also cause DNA damage in any lifeforms exposed to it. For planets that are further away from the blast, the supernova could alter their climate and trigger extinctions. There’s no absolutely certain understanding of how far away a planet needs to be to avoid devastation, but many scientists say that within 50 light-years, a planet is probably toast.

We can see the results of some of the stellar flybys the authors are considering. Rogue planets, or free-floating planets (FPPs), are likely in their hapless situations precisely because a stellar interloper got too close to their home systems and disrupted the gravitational relationships between the planets and their stars. We don’t know how many of these FPPs are in the Milky Way, but there could be many billions of them. Future telescopes like the Nancy Grace Roman Space Telescope will help us understand how many there truly are.

An artist’s illustration of a rogue planet, dark and mysterious. Image Credit: NASA

Habitability may be fleeting, and our planet may be the exception. It’s possible that life appears on many planets in habitable zones but can’t last long due to various factors. From a great distance away, we can’t detect all the variables that go into exoplanet habitability.

However, we can gain an understanding of the stellar environments in which potentially habitable exoplanets exist, and this research shows us how.

The post Habitable Worlds are Found in Safe Places appeared first on Universe Today.



New Glenn Booster Moves to Launch Complex 36

Nine years ago, Blue Origin revealed the plans for their New Glenn rocket, a heavy-lift vehicle with a reusable first stage that would compete with SpaceX for orbital flights. Since that time, SpaceX has launched hundreds of rockets, while Blue Origin has been working mostly in secret on New Glenn. Last week, the company rolled out the first prototype of the first-stage booster to the launch complex at Cape Canaveral Space Force Station. If all goes well, we could see a late November test on the launch pad.

The test will be an integrated launch vehicle hot-fire which will include the second stage and a stacked payload.

Images posted on social media by Blue Origin CEO Dave Limp showed the 57-meter (188-foot)-long first stage with its seven BE-4 engines as it was transported from the production facility in Merritt Island, Florida — next to the Kennedy Space Center — to Launch Complex 36 at Cape Canaveral. Limp said that it was a 23-mile, multiple-hour journey “because we have to take the long way around.” The booster was carried by Blue Origin’s transporter, called GERT (Giant Enormous Rocket Truck).

“Our transporter comprises two trailers connected by cradles and a strongback assembly designed in-house,” said Limp on X. “There are 22 axles and 176 tires on this transport vehicle…The distance between GERT’s front bumper and the trailer’s rear is 310’, about the length of a football field.”

Limp said the next step is to put the first and second stages together on the launch pad for the fully integrated hot fire dress rehearsal. The second stage recently completed its own hot fire at the launch site.

An overhead view of the New Glenn booster heading to launch complex 36 at Cape Canaveral during the night of Oct. 30, 2024. Credit: Blue Origin/Dave Limp.

Hopefully the test will lead to Blue Origin’s first ever launch to orbit. While the New Glenn rocket has had its share of delays, it seems Blue Origin has also taken a slow, measured approach to prepare for its first launch. In February of this year, a boilerplate of the rocket was finally rolled onto the launch pad at Cape Canaveral for testing.  Then in May 2024, New Glenn was rolled out again for additional testing. Now, the fully integrated test in the next few weeks will perhaps lead to a launch by the end of the year.

New Glenn’s seven engines will give it more than 3.8 million pounds of thrust on liftoff. The goal is for New Glenn to reuse its first-stage booster and the seven engines powering it, with recovery on a barge located downrange off the coast of Florida in the Atlantic Ocean.

New Glenn boosters are designed for 25 flights.

Blue Origin says New Glenn will launch payloads into high-energy orbits. It can carry more than 13 metric tons to geostationary transfer orbit (GTO) and 45 metric tons to low Earth orbit (LEO).

For the first flight, Blue Origin will be flying its own hardware as a payload, a satellite deployment technology called Blue Ring. Even though it doesn’t have a paying customer for the upcoming launch, a successful flight would be the first of two certification flights required by the U.S. Space Force before the rocket can be awarded future national security missions alongside SpaceX and United Launch Alliance (ULA).

Additional details can be found at PhysOrg and NASASpaceflight.com.

The post New Glenn Booster Moves to Launch Complex 36 appeared first on Universe Today.



How Many Additional Exoplanets are in Known Systems?

One thing we’ve learned in recent decades is that exoplanets are surprisingly common. So far, we’ve confirmed nearly 6,000 planets, and we have evidence for thousands more. Most of these planets were discovered using the transit method, though there are other methods as well. Many stars are known to have multiple planets, such as the TRAPPIST-1 system with seven Earth-sized worlds. But even within known planetary systems there could be planets we’ve overlooked. Perhaps their orbit doesn’t pass in front of the star from our vantage point, or the evidence of their presence is buried in data noise. How might we find them? A recent paper on the arXiv has an interesting approach.

Rather than combing through the observational data trying to extract more planets from the noise, the authors suggest that we look at the orbital dynamics of known systems to see if planets might be possible between the planets we know. Established systems are millions or billions of years old, so their planetary orbits must be stable on those timescales. If the planets of a system are “closely packed,” then adding new planets to the mix would throw the system out of kilter. If the system is “loosely packed,” then we could add hypothetical planets between the others, and the system would still be dynamically stable.

The seven planetary systems considered. Credit: Horner, et al

To show how this would work, the authors consider seven planetary systems discovered by the Transiting Exoplanet Survey Satellite (TESS) known to have two planets. Since it isn’t likely that a system has only two planets, there is a good chance they have others. The team then ran thousands of simulations of these systems with hypothetical planets, calculating if they could remain stable over millions of years. They found that for two of the systems, extra planets (other than planets much more distant than the known ones) could be ruled out on dynamical grounds. Extra planets would almost certainly destabilize the systems. But five of the systems could remain stable with more planets. That doesn’t mean those systems have more planets, only that they could.
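The flavour of this dynamical test can be captured with a standard back-of-the-envelope criterion: planet pairs separated by less than roughly eight mutual Hill radii tend not to survive long in multi-planet systems. The paper’s actual analysis uses full N-body simulations; the sketch below is only the crude analytic version of the same question, with made-up example values:

```python
def mutual_hill_radius(a1, a2, m1, m2, m_star):
    """Mutual Hill radius for two planets with semi-major axes a1 < a2
    (any length unit) and masses m1, m2 in the same unit as m_star."""
    return ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0) * (a1 + a2) / 2.0

def separation_in_hill_radii(a1, a2, m1, m2, m_star):
    return (a2 - a1) / mutual_hill_radius(a1, a2, m1, m2, m_star)

# Two known planets (made-up values): 10 and 20 Earth masses at 0.1 and
# 0.3 AU around a solar-mass star (~333,000 Earth masses).
M_STAR = 333_000.0
known = [(0.10, 10.0), (0.30, 20.0)]

# Try slotting a hypothetical 5 Earth-mass planet halfway between them.
candidate = (0.20, 5.0)
for (a1, m1), (a2, m2) in [(known[0], candidate), (candidate, known[1])]:
    delta = separation_in_hill_radii(a1, a2, m1, m2, M_STAR)
    verdict = "plausibly stable" if delta > 8 else "likely unstable"
    print(f"{a1:.2f} AU -> {a2:.2f} AU: {delta:.1f} mutual Hill radii ({verdict})")
```

In this toy case, both gaps come out well above the rough-stability threshold, so a middle planet can’t be ruled out on spacing alone — which is exactly the kind of question the paper’s simulations answer more rigorously.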

One of the things this work shows is that most of the currently known exoplanetary systems likely have yet-undiscovered worlds. This approach could also help us sort systems to determine which ones might deserve a further look. We are still in the early stages of discovery, and we are gathering data with incredible speed. We need tools like this so we aren’t overwhelmed by piles of new data.

Reference: Horner, Jonathan, et al. “The Search for the Inbetweeners: How packed are TESS planetary systems?” arXiv preprint arXiv:2411.00245 (2024).

The post How Many Additional Exoplanets are in Known Systems? appeared first on Universe Today.



Hubble and Webb are the Dream Team. Don't Break Them Up

Many people think of the James Webb Space Telescope as a sort of Hubble 2. They understand that the Hubble Space Telescope (HST) has served us well but is now old, and overdue for replacement. NASA seems to agree, as they have not sent a maintenance mission in over fifteen years, and are already preparing to wind down operations. But a recent paper argues that this is a mistake. Despite its age, HST still performs extremely well and continues to produce an avalanche of valuable scientific results. And given that JWST was never designed as a replacement for HST — it is an infrared (IR) telescope — we would best be served by operating both telescopes in tandem, to maximize coverage of all observations.

Let’s not fool ourselves: the Hubble Space Telescope (HST) is old, and is eventually going to fall back to Earth. Although it was designed to be repairable and upgradable, there have been no servicing missions since 2009. Those missions relied on the Space Shuttle, which could capture the telescope and provide a working base for astronauts. Servicing missions could last weeks, and only the Space Shuttle could transport the six astronauts to the telescope and house them for the duration of the mission.

Without those servicing missions, failing components can no longer be replaced, and the overall health of HST will keep declining. If nothing is done, HST will eventually stop working altogether. To avoid it becoming just another piece of space junk, plans are already being developed to de-orbit it and send it crashing into the Pacific Ocean. But that’s no reason to give up on it. It still has as clear a view of the cosmos as ever, and mission scientists are doing an excellent job of working around technical problems as they arise.

The James Webb Space Telescope was launched into space on Christmas Day 2021. Its system of foldable hexagonal mirrors gives it an effective diameter some 2.7 times that of HST, and it is designed to see down into the mid-IR range. Within months of deployment, it had already seen things that clashed with existing models of how the Universe formed, creating a mini-crisis in some fields and leading unscrupulous news editors to write headlines questioning whether the “Big Bang Theory” was under threat!

This image of NASA’s Hubble Space Telescope was taken on May 19, 2009 after deployment during Servicing Mission 4. NASA

The reason JWST was able to capture such ancient galaxies is that it is primarily an IR telescope: As the Universe expands, photons from distant objects get red-shifted until stars that originally shone in visible light can now only be seen in the IR. But these IR views are proving extremely valuable in other scientific fields apart from cosmology. In fact, many of the most striking images released by JWST’s press team are IR images of familiar objects, revealing hidden complexities that had not been seen before.

This is a key difference between the two telescopes: while HST’s range overlaps slightly with JWST’s, it can see all the way up into ultraviolet (UV) wavelengths. HST was launched in 1990, seven years late and billions of dollars over budget. Its 2.4-meter primary mirror needed to be one of the most precisely ground mirrors ever made, because it was intended to be diffraction limited at UV wavelengths. Famously, avoidable problems in the testing process led to it being very precisely figured to a slightly wrong shape, resulting in spherical aberration that prevented it from coming to a sharp focus.

Fortunately the telescope was designed from the start to be serviceable, and even returned to Earth for repairs by the Space Shuttle if necessary. In the end though, NASA opticians were able to design and build a set of corrective optics to solve the problem, and the COSTAR system was installed by astronauts on the first servicing mission. Over the years, NASA sent up three more servicing missions, to upgrade or repair components, and install new instruments.

Illustration of NASA’s James Webb Space Telescope. Credits: NASA

HST is arguably one of the most successful scientific instruments ever built. Since 1990, it has been the subject of approximately 1,200 science press releases, which together have been read more than 400 million times. The more than 46,000 scientific papers written using HST data have been cited more than 900,000 times! And even in its current degraded state, it still provided data for 1,435 papers in 2023 alone.

JWST also ran over time and over budget, but had a far more successful deployment. Despite having a much larger mirror, with more than six times the collecting area of HST, the entire observatory only weighs half as much as HST. Because of its greater sensitivity, and the fact that it can see ancient light redshifted into IR wavelengths, it can see far deeper into the Universe than HST. It is these observations, of galaxies formed when the Universe was extremely young (100 – 180 million years), that created such excitement shortly after it was deployed.

As valuable as these telescopes are, they will not last forever. JWST is located deep in space, some 1.5 million kilometers from Earth near the L2 Lagrange point. When it eventually fails, it will become just another piece of debris orbiting the Sun in the vast emptiness of the Solar System. HST, however, is in Low Earth Orbit (LEO), and suffers very slight amounts of drag from the faint outer reaches of the atmosphere. Over time it will gradually lose speed, drifting downwards until it enters the atmosphere proper and crashes to Earth. Because of its size, it will not burn up completely, and large chunks will smash into the surface.

Because it cannot be predicted where exactly it will re-enter, mission planners always intended to capture it with the Space Shuttle and return it to Earth before this happened. Its final resting place was supposed to be on display in a museum, but unfortunately the shuttle program was cancelled. The current plan is to send up an uncrewed rocket which will dock with the telescope (a special attachment was installed on the final servicing mission for this purpose), and deorbit it in a controlled way to ensure that its pieces land safely in the ocean.

You can find the original paper at https://arxiv.org/abs/2410.01187

The post Hubble and Webb are the Dream Team. Don't Break Them Up appeared first on Universe Today.



Monday, November 4, 2024

Scientists Have Figured out why Martian Soil is so Crusty

On November 26th, 2018, NASA’s Interior Exploration using Seismic Investigations, Geodesy, and Heat Transport (InSight) mission landed on Mars. This was a major milestone in Mars exploration since it was the first time a research station had been deployed to the surface to probe the planet’s interior. One of the most important instruments InSight would use to do this was the Heat Flow and Physical Properties Package (HP3) developed by the German Aerospace Center (DLR). Also known as the Martian Mole, this instrument measured the heat flow from deep inside the planet for four years.

The HP3 was designed to dig up to five meters (~16.5 ft) into the surface to sense heat deeper in Mars’ interior. Unfortunately, the Mole struggled to burrow itself and eventually got just beneath the surface, which was a surprise to scientists. Nevertheless, the Mole gathered considerable data on the daily and seasonal fluctuations below the surface. Analysis of this data by a team from the German Aerospace Center (DLR) has yielded new insight into why Martian soil is so “crusty.” According to their findings, temperatures in the top 40 cm (~16 inches) of the Martian surface lead to the formation of salt films that harden the soil.

The analysis was conducted by a team from the Microgravity User Support Center (MUSC) of the DLR Space Operations and Astronaut Training Institution in Cologne, which is responsible for overseeing the HP3 experiment. The heat data it obtained from the interior could be integral to understanding Mars’s geological evolution and addressing theories about its core region. At present, scientists suspect that geological activity on Mars largely ceased by the late Hesperian period (ca. 3 billion years ago), though there is evidence that lava still flows there today.

The “Mars Mole,” Heat Flow and Physical Properties Package (HP³). Credit: DLR

This was likely caused by Mars’ interior cooling faster due to its lower mass and lower pressure. Scientists theorize that this caused Mars’ outer core to solidify while its inner core became liquid—though this remains an open question. By comparing the subsurface temperatures obtained by InSight to surface temperatures, the DLR team could measure the rate of heat transport in the crust (thermal diffusivity) and thermal conductivity. From this, the density of the Martian soil could be estimated for the first time.

The team determined that the density of the uppermost 30 cm (~12 inches) of soil is comparable to basaltic sand – something that was not anticipated based on orbiter data. This material is common on Earth and is created by weathering volcanic rock rich in iron and magnesium. Beneath this layer, the soil density is comparable to consolidated sand and coarser basalt fragments. Tilman Spohn, the principal investigator of the HP3 experiment at the DLR Institute of Planetary Research, explained in a DLR press release:

“To get an idea of the mechanical properties of the soil, I like to compare it to floral foam, widely used in floristry for flower arrangements. It is a lightweight, highly porous material in which holes are created when plant stems are pressed into it... Over the course of seven Martian days, we measured thermal conductivity and temperature fluctuations at short intervals.

Additionally, we continuously measured the highest and lowest daily temperatures over the second Martian year. The average temperature over the depth of the 40-centimetre-long thermal probe was minus 56 degrees Celsius (217.5 Kelvin). These records, documenting the temperature curve over daily cycles and seasonal variations, were the first of their kind on Mars.”

NASA’s InSight spacecraft landed in the Elysium Planitia region on Mars on 26 November 2018. Credit: NASA-JPL/USGS/MOLA/DLR

Because the encrusted Martian soil (aka. “duricrust”) extends to a depth of 20 cm (~8 inches), the Mole managed to penetrate just a little more than 40 cm (~16 inches) – well short of its 5 m (~16.5 ft) objective. Nevertheless, the data obtained at this depth has provided valuable insight into heat transport on Mars. Accordingly, the team found that ground temperatures fluctuated by only 5 to 7 °C (9 to 12.5 °F) during a Martian day, a tiny fraction of the fluctuations observed on the surface—110 to 130 °C (230 to 266 °F).

Seasonally, they noted temperature fluctuation of 13 °C (~23.5 °F) while remaining below the freezing point of water on Mars in the layers near the surface. This demonstrates that the Martian soil is an excellent insulator, significantly reducing the large temperature differences at shallow depths. This influences various physical properties in Martian soil, including elasticity, thermal conductivity, heat capacity, the movement of material within, and the speed at which seismic waves can pass through them.
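That insulating behaviour is what a simple heat-conduction model predicts: a periodic surface temperature wave decays exponentially with depth, with an e-folding “skin depth” of sqrt(κP/π) for thermal diffusivity κ and forcing period P. A toy estimate with assumed, representative regolith properties (not the mission’s measured values):

```python
import math

def skin_depth(kappa, period):
    """e-folding depth (m) of a periodic temperature wave, for thermal
    diffusivity kappa (m^2/s) and forcing period (s)."""
    return math.sqrt(kappa * period / math.pi)

KAPPA = 5e-8        # m^2/s, assumed value typical of loose regolith
SOL = 88_775        # seconds in a Martian day

d = skin_depth(KAPPA, SOL)
print(f"diurnal skin depth: {d * 100:.1f} cm")                # ~3.8 cm
print(f"amplitude left at 40 cm: {math.exp(-0.40 / d):.1e}")  # ~2e-5 of surface
```

This naive model overstates the damping for the probe as a whole, which averages over its full 40 cm length and also feels slower seasonal waves, but it shows why daily temperature swings die off so quickly underground.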

“Temperature also has a strong influence on chemical reactions occurring in the soil, on the exchange with gas molecules in the atmosphere, and therefore also on potential biological processes regarding possible microbial life on Mars,” said Spohn. “These insights into the properties and strength of the Martian soil are also of particular interest for future human exploration of Mars.”

What was particularly interesting, though, is how the temperature fluctuations enable the formation of salty brines for ten hours a day (when there is sufficient moisture in the atmosphere) in winter and spring. Therefore, the solidification of this brine is the most likely explanation for the duricrust layer beneath the surface. This information could prove very useful as future missions explore Mars and attempt to probe beneath the surface to learn more about the Red Planet’s history.

Further Reading: DLR

The post Scientists Have Figured out why Martian Soil is so Crusty appeared first on Universe Today.



Another Way to Extract Energy From Black Holes?

The gravitational field of a rotating black hole is powerful and strange. It is so powerful that it warps space and time back upon itself, and it is so strange that even simple concepts such as motion and rotation are turned on their heads. Understanding how these concepts play out is challenging, but they help astronomers understand how black holes generate such tremendous energy. Take, for example, the concept of frame dragging.

Black holes form when matter collapses to be so dense that spacetime encloses it within an event horizon. This means black holes aren’t physical objects in the way we’re used to. They aren’t made of matter but are rather a gravitational imprint of where matter was. The same is true for the gravitational collapse of rotating matter. When we talk about a rotating black hole, this doesn’t mean the event horizon is spinning like a top; it means that spacetime near the black hole is twisted into a gravitational echo of the once-rotating matter. Which is where things get weird.

Suppose you were to drop a ball into a black hole. Not orbiting or rotating, just a simple drop straight down. Rather than falling in a straight line toward the black hole, the path of the ball will shift toward an orbital path as it falls, moving around the black hole ever faster as it gets closer. This effect is known as frame dragging. Part of the “rotation” of the black hole is transferred to the ball, even though the ball is in free fall. The closer the ball is to the black hole, the greater the effect.
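In the weak-field limit, the rate at which local inertial frames are dragged around a body with angular momentum J is often approximated as ω = 2GJ/(c²r³), falling off steeply with distance. A small sketch evaluating that textbook formula — the example black hole is illustrative, not from the paper, and the approximation is only trustworthy well outside the horizon:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def frame_drag_rate(J, r):
    """Weak-field frame-dragging angular velocity (rad/s) at radius r (m)
    outside a body with angular momentum J (kg m^2/s)."""
    return 2.0 * G * J / (C**2 * r**3)

# Illustrative example: a 10-solar-mass black hole spinning at half the
# maximal Kerr angular momentum, J_max = G * M^2 / c.
M = 10 * 1.989e30
J = 0.5 * G * M**2 / C

for r in (1e5, 1e6, 1e7):   # 100 km, 1,000 km, 10,000 km
    print(f"r = {r:.0e} m: omega = {frame_drag_rate(J, r):.2e} rad/s")
```

The steep 1/r³ falloff is the point: the dragging is ferocious near the hole and negligible far away, which is why a freely falling ball picks up rotation only as it closes in.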

This view of the M87 supermassive black hole in polarized light highlights the signature of magnetic fields. (Credit: EHT Collaboration)

A recent paper on the arXiv shows how this effect can transfer energy from a black hole’s magnetic field to nearby matter. Black holes are often surrounded by an accretion disk of ionized gas and dust. As the material of the disk orbits the black hole, it can generate a powerful magnetic field, which can superheat the material. While most of the power generated by this magnetic field is caused by the orbital motion, frame dragging can add an extra kick.

Essentially, a black hole’s magnetic field is generated by the bulk motion of the accretion disk. But thanks to frame dragging, the inner portion of the disk moves a bit faster than it should, while the outer portion moves a bit slower. This relative motion between them means that ionized matter moves relative to the magnetic field, creating a kind of dynamo effect. Thanks to frame dragging, the black hole creates more electromagnetic energy than you’d expect. While this effect is small for stellar mass black holes, it is large enough for supermassive black holes that we might see the effect in quasars through gaps in their power spectrum.

Reference: Okamoto, Isao, Toshio Uchida, and Yoogeun Song. “Electromagnetic Energy Extraction in Kerr Black Holes through Frame-Dragging Magnetospheres.” arXiv preprint arXiv:2401.12684 (2024).

The post Another Way to Extract Energy From Black Holes? appeared first on Universe Today.



Sunday, November 3, 2024

Plastic Waste on our Beaches Now Visible from Space, Says New Study

According to the United Nations, the world produces about 430 million metric tons (about 474 million U.S. tons) of plastic annually, two-thirds of which are only used for a short time and quickly become garbage. What’s more, plastics are the most harmful and persistent fraction of marine litter, accounting for at least 85% of total marine waste. This problem is easily recognizable due to the Great Pacific Garbage Patch and the amount of plastic waste that washes up on beaches and shores every year. Unless measures are taken to address this problem, the annual flow of plastic into the ocean could triple by 2040.

One way to address this problem is to improve the global tracking of plastic waste using Earth observation satellites. In a recent study, a team of Australian researchers developed a new method for spotting plastic rubbish on our beaches, which they successfully field-tested on a remote stretch of coastline. This satellite imagery tool distinguishes between sand, water, and plastics based on how they reflect light differently. It can detect plastics on shorelines from an altitude of more than 600 km (~375 mi) – higher than the International Space Station‘s (ISS) orbit.

The paper that describes their tool, “Beached Plastic Debris Index; a modern index for detecting plastics on beaches,” was recently published by the Marine Pollution Bulletin. The research team was led by Jenna Guffogg, a researcher at the Royal Melbourne Institute of Technology University (RMIT) and the Faculty of Geo-Information Science and Earth Observation (ITC) at the University of Twente. She was joined by multiple colleagues from both institutions. The study was part of Dr. Guffogg’s joint PhD research with the support of an Australian Government Research Training Program (RTP) scholarship.

Dr Jenna Guffogg said plastic on beaches can have severe impacts on wildlife and their habitats, just as it does in open waters. Credit: BPDI

According to current estimates, humans dump well over 10 million metric tons (11 million U.S. tons) of plastic waste into our oceans annually. Since plastic production continues to increase worldwide, these numbers are projected to increase dramatically. What ends up on our beaches can severely impact wildlife and marine habitats, just like the impact it has in open waters. If these plastics are not removed, they will inevitably fragment into micro and nano plastics, another major environmental hazard. Said Dr. Guffogg in a recent RMIT University press release:

“Plastics can be mistaken for food; larger animals become entangled, and smaller ones, like hermit crabs, become trapped inside items such as plastic containers. Remote island beaches have some of the highest recorded densities of plastics in the world, and we’re also seeing increasing volumes of plastics and derelict fishing gear on the remote shorelines of northern Australia.

“While the impacts of these ocean plastics on the environment, fishing and, tourism are well documented, methods for measuring the exact scale of the issue or targeting clean-up operations, sometimes most needed in remote locations, have been held back by technological limitations.”

Satellite technology is already used to track plastic garbage floating around the world’s oceans. This includes relatively small drifts containing thousands of plastic bottles, bags, and fishing nets, but also gigantic floating trash islands like the Great Pacific Garbage Patch. As of 2018, this garbage patch measured about 1.6 million km2 (620,000 mi2) and consisted of 45,000–129,000 metric tons (50,000–142,000 U.S. tons). However, the technology used to locate plastic waste in the ocean is largely ineffective at spotting plastic on beaches.

Geospatial scientists have found a way to detect plastic waste on remote beaches, bringing us closer to global monitoring options. Credit: RMIT

Much of the problem is that plastic can be mistaken for patches of sand when viewed from space. The Beached Plastic Debris Index (BPDI) developed by Dr. Guffogg and her colleagues circumvents this by employing a spectral index – a mathematical formula that analyzes patterns of reflected light. The BPDI is specially designed to map plastic debris in coastal areas using high-definition data from the WorldView-3 satellite, a commercial Earth observation satellite (owned by Maxar Technologies) that has been in operation since 2014.
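The article doesn’t reproduce the BPDI formula itself, but spectral indices of this family are typically normalized band differences (NDVI is the best-known example). The sketch below shows that general pattern only; the band choices, reflectance values, and threshold are all made up for illustration:

```python
import numpy as np

def normalized_difference_index(band_a, band_b):
    """Generic normalized-difference spectral index: (a - b) / (a + b).
    The real BPDI uses its own band combination, which the article does
    not spell out; this only illustrates the family of formulas."""
    a = np.asarray(band_a, dtype=float)
    b = np.asarray(band_b, dtype=float)
    return (a - b) / np.clip(a + b, 1e-9, None)

# Toy reflectances for two hypothetical WorldView-3 bands over four pixels.
nir  = np.array([0.55, 0.40, 0.50, 0.38])   # near-infrared reflectance
swir = np.array([0.31, 0.12, 0.28, 0.09])   # shortwave-infrared reflectance

index = normalized_difference_index(nir, swir)
plastic_mask = index > 0.5                   # made-up threshold for toy data
print(index)
print(plastic_mask)
```

The appeal of this kind of index is that it is a simple per-pixel ratio, so it can be applied to an entire satellite scene at once to flag candidate plastic pixels.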

Thanks to their efforts, scientists now have an effective way to monitor plastic on beaches, which could assist in clean-up operations. As part of the remote sensing team at RMIT, Dr. Guffogg and her colleagues have developed similar tools for monitoring forests and mapping bushfires from space. To validate the BPDI, the team field-tested it by placing 14 plastic targets on a beach in southern Gippsland, about 200 km (125 mi) southeast of Melbourne. Each target was made of a different type of plastic and measured two square meters (21.5 square feet) – smaller than the satellite’s pixel size of about three square meters.

The resulting images were compared to three other indices, two designed for detecting plastics on land and one for detecting plastics in aquatic settings. The BPDI outperformed all three as the others struggled to differentiate between plastics and sand or misclassified shadows and water as plastic. As study author Dr. Mariela Soto-Berelov explained, this makes the BPDI far more useful for environments where water and plastic-contaminated pixels are likely to coexist.  

“This is incredibly exciting, as up to now we have not had a tool for detecting plastics in coastal environments from space. The beauty of satellite imagery is that it can capture large and remote areas at regular intervals. Detection is a key step needed for understanding where plastic debris is accumulating and planning clean-up operations, which aligns with several Sustainable Development Goals, such as Protecting Seas and Oceans.”  

The next step is to test the BPDI tool in real-life scenarios, which will consist of the team partnering with various organizations dedicated to monitoring and addressing the plastic waste problem.

Further Reading: RMIT, Marine Pollution Bulletin

The post Plastic Waste on our Beaches Now Visible from Space, Says New Study appeared first on Universe Today.



Future Space Telescopes Could be Made From Thin Membranes, Unrolled in Space to Enormous Size

Space-based telescopes are remarkable. Their view isn’t obscured by the weather in our atmosphere, and so they can capture incredibly detailed images of the heavens. Unfortunately, they are quite limited in mirror size. As amazing as the James Webb Space Telescope is, its primary mirror is only 6.5 meters in diameter. Even then, the mirror had to have foldable components to fit into the launch rocket. In contrast, the Extremely Large Telescope currently under construction in northern Chile will have a mirror more than 39 meters across. If only we could launch such a large mirror into space! A new study looks at how that might be done.

As the study points out, when it comes to telescope mirrors, all you really need is a reflective surface. It doesn’t need to be coated onto a thick piece of glass, nor does it need a big, rigid support structure. All of that is just there to hold the shape of the mirror against its own weight. As far as starlight is concerned, the shiny surface is all that matters. So why not just use a thin sheet of reflective material? You could just roll it up and put it in your launch vehicle. We could, for example, easily launch a 40-meter roll of aluminum foil into space.

Of course, things aren’t quite that simple. You would still need to unroll your membrane telescope back into its proper shape. You would also need a detector to focus the image upon, and you’d need a way to keep that detector in the correct alignment with the membrane mirror. In principle, you could do that with a thin support structure, which wouldn’t add excessive bulk to your telescope. But even if we assume all of those engineering problems could be solved, you’d still have a problem. Even in the vacuum of space, the shape of such a thin mirror would deform over time. Solving this problem is the main focus of this new paper.

Once launched into space and unfurled, the membrane mirror wouldn’t deform significantly. But to capture sharp images, the mirror would have to hold its shape to within a fraction of a wavelength of visible light. When Hubble was launched, its mirror shape was off by less than the thickness of a human hair, and it took corrective optics and an entire shuttle mission to fix. Any shifts on that scale would render our membrane telescope useless. So the authors look to a well-used trick of astronomers known as adaptive optics.

How radiative adaptive optics might work. Credit: Rabien, et al

Adaptive optics is used on large ground-based telescopes as a way to correct for atmospheric distortion. Actuators behind the mirror distort the mirror’s shape in real time to counteract the twinkles of the atmosphere. Essentially, it makes the shape of the mirror imperfect to account for our imperfect view of the sky. A similar trick could be used for a membrane telescope, but if we had to launch a complex actuator system for the mirror, we might as well go back to launching rigid telescopes. But what if we simply use laser projection instead?

By shining a laser projection onto the mirror, we could alter its shape through radiative recoil. Since it is simply a thin membrane, the shape change would be significant enough to create optical corrections, and it could be modified in real time to maintain the mirror’s focus. The authors call this technique radiative adaptive optics, and through a series of lab experiments they have demonstrated that it could work.
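The recoil in question is ordinary radiation pressure: a beam of power P reflected from a surface imparts a force of up to 2P/c. A quick sketch of the magnitudes involved, with made-up laser and reflectivity figures:

```python
C = 2.998e8  # speed of light, m/s

def recoil_force(power_w, reflectivity=1.0):
    """Radiation-pressure force (N) from a beam of the given power hitting
    a surface; a perfect reflector gets the full factor of two."""
    return (1.0 + reflectivity) * power_w / C

# Illustrative numbers (not from the paper): a 10 W laser projected onto
# a patch of highly reflective membrane.
force = recoil_force(10.0, reflectivity=0.98)
print(f"recoil force: {force:.2e} N")  # ~6.6e-8 N
```

Tiny as that is, a tensioned membrane is compliant enough that gentle, steerable forces of this order can usefully nudge its figure — which is precisely why the technique targets thin mirrors rather than rigid ones.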

Doing this in deep space is much more complicated than doing it in the lab, but the work shows the approach is worth exploring. Perhaps in the coming decades we might build an entire array of such telescopes, which would allow us to see details in the distant heavens we can now only imagine.

Reference: Rabien, S., et al. “Membrane space telescope: active surface control with radiative adaptive optics.” Space Telescopes and Instrumentation 2024: Optical, Infrared, and Millimeter Wave. Vol. 13092. SPIE, 2024.

The post Future Space Telescopes Could be Made From Thin Membranes, Unrolled in Space to Enormous Size appeared first on Universe Today.



Saturday, November 2, 2024

Voyager 1 is Forced to Rely on its Low Power Radio

Voyager 1 was launched waaaaaay back in 1977. I would have been 4 years old then! It’s an incredible achievement that technology built THAT long ago is still working. Yet here we are in 2024, and Voyager 1 and 2 are getting older. Earlier this week, one of the radio transmitters on Voyager 1 was turned off, forcing communication to rely upon the low-power radio. Alas, technology that is nearly 50 years old does sometimes glitch, and this one was triggered by a command to turn on a heater: Voyager 1 tripped into fault protection mode and switched communications. Oops.

Voyager 1 is a NASA space probe launched on September 5, 1977, as part of the Voyager program to study the outer planets and beyond. Initially, Voyager 1’s mission focused on flybys of Jupiter and Saturn, capturing incredible images before traveling outward. In 2012, it became the first human-made object to enter interstellar space, crossing the heliopause — the boundary between the influence of the Sun and interstellar space. It now continues to send data back to Earth from over 22 billion km away, helping scientists learn about the interstellar medium. There is also a “Golden Record” onboard containing sounds and images of life on Earth, making Voyager 1 a time capsule intended to tell the story of our world to any alien civilization that may encounter it.

The ringed planet Saturn, imaged by Voyager 2

Just a few days ago, on 24 October, NASA had to re-establish contact with Voyager 1 because one of its radio transmitters had been turned off! Alien intervention, perhaps? Exciting though that would be, alas not.

The transmitter seems to have been turned off by one of the spacecraft's fault-protection systems. Any time there is an issue with onboard systems, the computer flips into protection mode to prevent further damage. If the spacecraft draws too much power, the same system will turn off less critical systems to conserve it. When fault protection kicks in, it's then the job of engineers on the ground to diagnose and fix the fault.
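
The logic is easy to sketch in a few lines of Python. This is a toy model, not Voyager's flight software: the load names, wattages, and budget are invented, though the swap from the primary transmitter to a weaker backup mirrors what happened here.

```python
# Toy illustration of spacecraft fault-protection logic. The loads and
# numbers are invented; Voyager's real flight software is far more complex.

POWER_BUDGET_W = 240.0

loads = {
    "command_receiver": 30.0,
    "x_band_transmitter": 80.0,   # primary transmitter, power-hungry
    "science_instruments": 90.0,
    "heater": 60.0,               # the newly commanded load
}

def fault_protection(loads, budget_w):
    """If total demand exceeds the budget, fall back to a low-power transmitter."""
    if sum(loads.values()) > budget_w:
        print("fault protection: demand over budget, switching transmitters")
        loads.pop("x_band_transmitter")
        loads["s_band_transmitter"] = 20.0   # weaker signal, far less power
    return loads

loads = fault_protection(loads, POWER_BUDGET_W)   # 260 W requested, 240 W available
print("active loads:", loads)                     # 200 W total, back under budget
```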

Artist rendition of Voyager 1 entering interstellar space. (Credit: NASA/JPL-Caltech)

There are challenges here, though. Voyager 1 is now about 24 billion km away, so any signal to or from the spacecraft takes almost 23 hours to arrive. A request for data therefore means a round trip of roughly 46 hours between sending the command and receiving the reply! Undaunted, the team sent commands to Voyager 1 on 16 October to turn on a heater. Although the probe should have had enough power, the command triggered the fault-protection system, which turned off a radio transmitter to conserve power. The problem came to light on 18 October, when the Deep Space Network could no longer detect the spacecraft's usual ping.
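
Those delay figures are simple arithmetic, distance divided by the speed of light, and a quick check in Python lands close to the numbers quoted above:

```python
# One-way light time to Voyager 1 at roughly 24 billion km.
distance_km = 24e9
c_km_s = 299_792.458                                 # speed of light, km/s

one_way_hours = distance_km / c_km_s / 3600
print(f"one way:    {one_way_hours:.1f} hours")      # ~22.2 hours
print(f"round trip: {2 * one_way_hours:.1f} hours")  # ~44.5 hours
```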

The engineers correctly identified the likely cause and found Voyager 1 pinging away on a different frequency, using the alternate radio transmitter, one that hadn't been used since the early 1980s! With the fault identified, the team did not switch straight back to the original transmitter in case the fault triggered again. Instead, they are working to understand the fault before switching back.

Until then, Voyager 1 will continue to communicate with Earth using the lower-power transmitter as it continues its exploration of interstellar space.

Source : After Pause, NASA’s Voyager 1 Communicating With Mission Team




Friday, November 1, 2024

China Trains Next Batch of Taikonauts

China has a fabulously rich history when it comes to spaceflight and was among the first to experiment with rocket technology; the invention of the rocket is often attributed to the Song Dynasty (AD 960-1279). Since then, China has been keen to develop and build its own space industry. The China National Space Administration has already landed probes on the Moon and is now preparing for its first human landings. Chinese astronauts are often known as taikonauts, and CNSA has just confirmed that its fourth batch of taikonauts is set for lunar landings.

The China National Space Administration (CNSA) is China's equivalent of NASA. It was founded in 1993 to oversee the country's space ambitions, and impressive results have followed over the last two decades, including the landmark Chang'e lunar missions: in 2019 Chang'e-4 became the first lander to touch down on the far side of the Moon, and in 2021 China landed its first rover on Mars. Also in 2021, the first module of CNSA's Tiangong space station was launched; now operational, the station hosts a range of scientific research projects in cooperation with other space agencies.

China announced that it successfully completed its latest selection process in May, as CNSA strives to expand its team of taikonauts. Ten candidates were chosen from the applicants: eight experienced spacecraft pilots and two payload specialists. Their program of training, beginning in August, covers over 200 subject areas designed to prepare them for future missions to the Moon and other Chinese space initiatives.

The training covers an extensive range of skills: living and working in microgravity, maintaining physical and mental health in space, and specialist instruction in extravehicular activities. The taikonauts will also learn maintenance techniques for advanced spacecraft systems and get hands-on training in running experiments in microgravity.

On her 2007 mission aboard the International Space Station, NASA astronaut Peggy Whitson, Expedition 16 commander, worked on the Capillary Flow Experiment (CFE), which observes the flow of fluid, in particular capillary phenomena, in microgravity. Credits: NASA

The program is designed to expand and fine-tune the taikonauts' skills in preparation for future crewed lunar missions. Specialist training for lunar landings includes piloting spacecraft under different gravitational conditions, manoeuvring lunar rovers, celestial navigation, and stellar identification.

Not only will they learn about space operations, but they will also acquire the skills needed to support scientific objectives: how to conduct geological surveys, operate tools, and manoeuvre in reduced gravity.

Source : China’s fourth batch of taikonauts set for lunar landings




NASA Focusses in on Artemis III Landing Sites.

It was in 1969 that humans first set foot on the Moon. Back then the Apollo program led our efforts to land on the Moon, but now, over 50 years on, we look set to head back. The Artemis program hopes to take us back to the Moon, and it is going from strength to strength. The plan is to get humans onto the lunar surface by 2025 as part of Artemis III, and as a prelude, NASA is now turning its attention to possible landing sites.

The Artemis program is NASA's effort to return humans to the Moon and establish a permanent base there, ultimately paving the way for missions to Mars. Established in 2017, Artemis intends to land "the first woman and the next man" on the lunar surface by 2025. The program began with Artemis I, an uncrewed mission that orbited the Moon; Artemis II will take astronauts around the Moon; and Artemis III will land humans back on the surface. At the heart of the program are the giant Space Launch System (SLS) rocket and the Orion spacecraft.

NASA’s Space Launch System rocket carrying the Orion spacecraft launches on the Artemis I flight test, Wednesday, Nov. 16, 2022, from Launch Complex 39B at NASA’s Kennedy Space Center in Florida. Credit: NASA/Joel Kowsky.

As plans ramp up for the first crewed landing, NASA is analysing possible landing sites and has identified nine potential spots, all near the Moon's south pole, which would put Artemis III close to potentially useful resources. Further investigation will be needed to assess each site's suitability.

The analysis is being carried out by the Cross Agency Site Selection Analysis team, working with other science and industry partners. The teams will assess each candidate site for science value and mission suitability, including the availability of water ice. The list so far, in no particular order, is:

  • Peak near Cabeus B
  • Haworth
  • Malapert Massif
  • Mons Mouton Plateau
  • Mons Mouton
  • Nobile Rim 1
  • Nobile Rim 2
  • de Gerlache Rim 2
  • Slater Plain

The south polar region was chosen chiefly because it has water ice locked up in its permanently shadowed craters. The Apollo missions never visited that part of the Moon, so it is also a great opportunity for humans to explore an ancient region of the lunar surface. To settle on these nine areas, the team assessed the south polar region for launch-window availability, terrain suitability, communication capability, and even lighting conditions. The geology team also assessed each candidate's scientific value.
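
NASA has not published a scoring formula, but the trade-off is easy to picture as a weighted score across those criteria. Here is a purely hypothetical sketch: the criteria come from the article, while the weights and all scores are invented for illustration.

```python
# Hypothetical weighted scoring of candidate Artemis III landing sites.
# Criteria follow the article; weights and all scores are invented.

weights = {
    "launch_windows": 0.25,
    "terrain": 0.25,
    "communications": 0.20,
    "lighting": 0.15,
    "science_value": 0.15,
}

sites = {
    "Malapert Massif": {"launch_windows": 8, "terrain": 6,
                        "communications": 7, "lighting": 9, "science_value": 8},
    "Nobile Rim 1":    {"launch_windows": 7, "terrain": 8,
                        "communications": 6, "lighting": 7, "science_value": 9},
}

def score(criteria):
    """Weighted sum of a site's criterion scores (0-10 scale)."""
    return sum(weights[name] * value for name, value in criteria.items())

for name, criteria in sorted(sites.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(criteria):.2f}")
```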

Apollo 17 astronaut Harrison Schmitt collecting a soil sample, his spacesuit coated with dust. Credit: NASA

NASA will settle on a final landing site once the launch date has been decided, since the date determines the transfer trajectory to the Moon, the orbital paths, and the surface conditions at landing.

Source : NASA Provides Update on Artemis III Moon Landing Regions




The Connection Between Black Holes and Dark Energy is Getting Stronger

The accelerated expansion of the Universe is generally attributed to the force known as dark energy. An intriguing theory put forward last year offers an explanation for this mysterious force: black holes could be the source of dark energy! The theory suggests that as more black holes form in the Universe, the push from dark energy grows stronger. A survey from the Dark Energy Spectroscopic Instrument (DESI) seems to support the theory: data from its first year of operation shows that the density of dark energy increases over time and seems to correlate with the number and mass of black holes!

Cast your mind back 13.8 billion years to the beginning of the Universe. Just after the Big Bang, the moment the Universe popped into existence, there was a brief period when the Universe expanded faster than the speed of light. Before you object that nothing can travel faster than light: it was the very fabric of space and time that was expanding. The speed-of-light limit applies to travel through space, not to the stretching of space itself! This was the inflationary period.

This illustration shows the “arrow of time” from the Big Bang to the present cosmological epoch. Credit: NASA

The energy that drove that early expansion shares similarities with dark energy, the repulsive influence that seems to permeate the Universe and is driving its present-day accelerated expansion.

What is dark energy, though? It is thought to make up around 68% of the Universe and, unlike normal matter and energy, it seems to push things apart rather than pull them together. This repulsive nature was first inferred in the late 1990s, when astronomers observing distant supernovae deduced that the expansion was accelerating. As to what dark energy actually is or where it comes from, no one really knows. That is, perhaps, until now.
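
To give "repulsive" a quantitative footing (this is standard textbook cosmology, not a result from this research): the acceleration of the cosmic scale factor \(a\) obeys

\[
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right),
\]

so any component with pressure \(p < -\rho c^2/3\) makes \(\ddot{a} > 0\) and accelerates the expansion. A cosmological constant, with \(p = -\rho c^2\), does exactly that, and dark energy appears to behave much like one.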

Artist’s illustration of a bright and powerful supernova explosion. (Credit: NASA/CXC/M.Weiss)

A team of researchers from the University of Michigan and other institutions has published a paper in the Journal of Cosmology and Astroparticle Physics proposing that black holes are the source of dark energy. "Where in the later Universe do we see gravity as strong as it was at the beginning of the Universe?" asked Professor Gregory Tarle. The answer, he explains, is at the centres of black holes. Tarle and his team propose that what happened during the inflationary period runs in reverse during the collapse of a massive star; when it does, the star's matter could conceivably become dark energy.
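
Published work on this idea, sometimes called "cosmologically coupled black holes", parameterises it by letting each black hole's mass grow with the cosmic scale factor a as M ∝ a^k (that parameterisation comes from the research literature, not from this article). With k = 3, a fixed population of black holes keeps a constant energy density as space expands, exactly the behaviour of a cosmological constant; the overall dark energy density then grows because stellar collapse keeps minting new black holes. A minimal numeric sketch:

```python
# Sketch of 'cosmological coupling': each black hole's mass grows with the
# scale factor a as M = M0 * a**k. Units and values are arbitrary
# illustrations, not numbers from the DESI analysis.

k = 3.0      # coupling strength; k = 3 mimics a cosmological constant
M0 = 10.0    # black-hole mass at a = 1
n0 = 1.0     # comoving number density at a = 1

for a in (1.0, 2.0, 4.0):
    M = M0 * a**k      # each black hole gains mass as space expands
    n = n0 / a**3      # but the population dilutes with the growing volume
    rho = n * M        # energy density of the whole population
    print(f"a = {a:.0f}:  M = {M:6.1f}  n = {n:.4f}  rho = {rho:.1f}")

# rho comes out the same at every epoch: constant energy density despite
# expansion, which is precisely how a cosmological constant behaves.
```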

The team used data from the Dark Energy Spectroscopic Instrument (DESI), which is mounted on the 4-metre Mayall Telescope at Kitt Peak National Observatory. The instrument is essentially 5,000 computer-controlled optical fibres covering about 8 square degrees of sky. The evidence comes from studying tens of millions of galaxies, so far away that their light takes billions of years to reach us, letting astronomers measure how fast the Universe is expanding with unprecedented precision.

Stu Harris works on assembling the focal plane for the Dark Energy Spectroscopic Instrument (DESI), which involves hundreds of thousands of parts, at Lawrence Berkeley National Laboratory on Wednesday, 6 December, 2017 in Berkeley, Calif.

The data shows evidence that dark energy has increased with time. That alone is perhaps not surprising, but it appears to closely mirror the growth in the number and mass of black holes over the same period. Now that DESI is fully operational, more observations are needed to pin down black holes and quantify their growth over time, to see whether this exciting new hypothesis really has merit.

Source : Evidence mounts for dark energy from black holes
