Measuring the Big Bang with the COBE satellite


By John Mather
Cosmic Background Explorer satellite (COBE)

The Cosmic Background Explorer satellite (COBE) went up on a Delta rocket on Nov. 18, 1989, into a polar sun-synchronous orbit 900 km up. Our team at NASA Goddard Space Flight Center (GSFC), Ball Aerospace, the Jet Propulsion Laboratory (JPL) and universities built it to look at the cosmic microwave and infrared background light that comes to us from the distant universe, so far away that it seems to be a nearly uniform glow. With it, we started the new subject of precision cosmology; before COBE, very little was known except the general idea of an expanding universe, misnamed the Big Bang. (It’s misnamed because the name conjures up the image of a firecracker, happening at a place and a time. Astronomers see an infinite universe expanding into itself, with no center, no edge and no first moment.)

Our team measured the spectrum of the cosmic heat ― more precisely, the cosmic microwave background radiation ― left over from times when the universe was compressed and hot, with a precision of 50 parts per million. The prediction was for a nearly perfect blackbody spectrum, and it matched. No other story of the universe was ever able to explain that. We also found the hot and cold spots of the heat radiation, known as anisotropy (Greek for “not the same in every direction”). Stephen Hawking said that was the most important scientific discovery of the century, if not of all time.
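The blackbody prediction that the COBE spectrum confirmed comes straight from the Planck law. The sketch below is my own illustration, not COBE analysis code: it evaluates the Planck spectral radiance at the measured CMB temperature of about 2.725 K and locates its peak near 160 GHz, squarely in the microwave band.

```python
import math

# Physical constants (SI)
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def planck_brightness(nu_hz, t_kelvin):
    """Blackbody spectral radiance B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    x = h * nu_hz / (k * t_kelvin)
    return (2.0 * h * nu_hz**3 / c**2) / math.expm1(x)

T_CMB = 2.725  # K, the measured CMB temperature

# Scan 1-1000 GHz to find where the blackbody curve peaks.
freqs = [g * 1e9 for g in range(1, 1001)]
peak = max(freqs, key=lambda nu: planck_brightness(nu, T_CMB))
print(f"Peak of B_nu at ~{peak / 1e9:.0f} GHz")
```

The peak near 160 GHz (a wavelength of about 1.9 mm) is why a satellite above the atmosphere, with microwave and far-infrared instruments, was the right tool for the measurement.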

Now we know that: a.) the spots are responsible for our existence, because gravity acting on the regions of higher density was able to stop the matter from expanding; b.) most of the spots are caused by dark matter; and c.) if we ever know what made the spots, we might understand quantum gravity. In 2006, I got a call from Stockholm, and the Nobel Prize in Physics went to me and to George Smoot in recognition of the work of our team. Now the entire world knows what we know: it was really important.

We started in 1974, just 5 years after the first Apollo landing on the Moon, when NASA announced opportunities to propose new satellite missions. I had just finished my thesis project in January and taken a job with NASA’s Goddard Institute for Space Studies in New York City to become a radio astronomer. My thesis project at the University of California, Berkeley, was intended to measure that cosmic background radiation, but it failed to function properly. Yet only months after my arrival in New York, NASA announced the opportunity. My advisor Pat Thaddeus knew what to do: call up our friends and write a proposal. (One of those friends is Rainer Weiss of the Massachusetts Institute of Technology, who was also working on gravitational wave detection. He shared the 2017 Nobel Prize for detecting gravitational waves from merging black holes.)

I never expected our proposal to be chosen, but it was, thanks to far-seeing people at Headquarters like Nancy Boggess, and NASA created a new science team including people from two competing teams. Anticipating that choice, Mike Hauser recruited me to Goddard in Greenbelt, Maryland, and I was hoping to become the lead scientist. Soon Goddard assigned a brilliant team of engineers, who were just completing the IUE observatory, to help us along. We built up a team that eventually included 1,500 contributors, including a science team of 19 spread around the country.

The project was extraordinarily challenging, and became the largest in-house project Goddard has ever done. We brought the work in, because we were pushing so far beyond known engineering that it was impossible to write a contract specification; I spent much of my life in the offices of engineers seeking approaches to doing the impossible. I trusted my future to them, and they to me. In the end, our mission worked beautifully, after many changes, including a redesign after the Challenger loss made it clear we would not be launched on the shuttle.

NASA and its partner agencies like the European Space Agency and Canadian Space Agency are the only places in the known universe where space science and space engineering meet so intimately, where engineers build what has never been built before, so scientists may discover what has never been known before. I can only marvel at the works we have done, and imagine what we may yet do together.

About the Author

  • John C. Mather

    John C. Mather is a senior astrophysicist in the Observational Cosmology Laboratory at NASA’s Goddard Space Flight Center (GSFC). His research centers on infrared astronomy and cosmology. As an NRC postdoctoral fellow at the Goddard Institute for Space Studies, New York City, he led the proposal efforts for the Cosmic Background Explorer (1974-1976), and came to GSFC to be the study scientist (1976-1988), project scientist (1988-1998), and also the principal investigator for the Far IR Absolute Spectrophotometer (FIRAS) on COBE. As senior project scientist (1995-present) for the James Webb Space Telescope, Dr. Mather leads the science team and represents scientific interests within the project management. He has received many awards including the 2006 Nobel Prize in Physics for his precise measurements of the cosmic microwave background radiation using the COBE satellite.


Details

Last Updated
Feb 18, 2026

Peering Homeward, 1972


By Laura Rocchio
The scientists and engineers at NASA Goddard looking at the first MSS images were seeing just one band of data, so the images appeared black and white to them. This image shows the area of the July 25, 1972 scene, in the Ouachita Mountains, that initially had them concerned that something was wrong with the imagery.
NASA/USGS

On July 23, 1972, the first civilian satellite designed to image Earth’s land surfaces was launched from Vandenberg Air Force Base in California. On board the satellite, originally named the Earth Resources Technology Satellite (ERTS) but later known as Landsat 1, were two sensors. The primary sensor, called the Return Beam Vidicon (RBV), used three shuttered cameras to take photographs; the secondary sensor, the Multispectral Scanner System (MSS), was an experimental instrument.

Both sensors were packed onto a “butterfly-shaped” spacecraft repurposed from the successful Nimbus weather missions. There were strict size and weight limitations for the sensors, especially for the experimental MSS, which weighed less than the primary RBV sensor and the data recorder. (At over 150 pounds, the data recording system onboard was the biggest recording device ever orbited.)

A color composite (MSS bands 6,7,5) showing the first cloud-free land image acquired by the Landsat 1 multispectral scanner system (MSS), on July 25, 1972, including the Ouachita Mountains in southeastern Oklahoma. The dark stripe above the image center results from several dropped MSS scanlines.
NASA/USGS

The MSS technology was a novel way of looking at Earth. It used a scanning mirror to build up an image pixel by pixel, with six scan lines sweeping across the satellite’s ground path 13.62 times per second as the satellite hurtled around Earth at over 14,400 mph. The MSS was the first civilian imaging scanner to orbit Earth, and many scientists and engineers outside the small cadre of scanner enthusiasts wondered whether the instrument could successfully produce an image while traveling at such a high velocity. This made for a harrowing day when the first imagery was transmitted back to Earth two days after launch.
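The numbers above can be cross-checked with a little arithmetic: six lines per sweep at 13.62 sweeps per second must lay down ground coverage at the same rate the satellite’s footprint moves. Taking the nominal MSS along-track line spacing of about 79 m (an assumed figure, not given in the article) and treating 14,400 mph as the effective ground-track speed:

```python
# Consistency check of the MSS scan geometry from the article's numbers.
# The ~79 m along-track line spacing is an assumption (the nominal MSS
# ground resolution), not a figure stated in the article.

lines_per_sweep = 6
sweeps_per_sec = 13.62
line_spacing_m = 79.0  # assumed nominal MSS line spacing

ground_covered = lines_per_sweep * sweeps_per_sec * line_spacing_m  # m/s

mph_to_mps = 1609.344 / 3600.0
satellite_speed = 14_400 * mph_to_mps  # ~6437 m/s

print(f"Scan coverage rate: {ground_covered:.0f} m/s")
print(f"Ground-track speed: {satellite_speed:.0f} m/s")
```

The two rates agree to within about one percent, which is exactly what a gap-free, non-overlapping scanning design requires.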

A group of Landsat scientists and engineers gathered in the Landsat data processing facility at NASA’s Goddard Space Flight Center as the first MSS digital transmission was translated onto 70-mm film by an electron beam recorder and then displayed. As they watched the first imagery scroll by they saw clouds, more clouds, and finally land… but the black and white image had irregular wavy lines on it.

“It’s terrible. It has moiré patterns,” a technician lamented. Quickly those in the room figured out where the image was showing geographically—the Ouachita Mountain region of southeastern Oklahoma. Then the geologists in the room realized that they were seeing the curvilinear outcrops of the ancient mountains.

Landsat 1’s Return Beam Vidicon (RBV) cameras, built by RCA. 
NASA

Anxiety transformed into excitement. NASA geologist Nicholas Short, who had been unconvinced of the utility of land remote sensing for geology, turned to the NASA Deputy Associate Administrator for Space Applications and said, “I was so wrong about this. I’m not going to eat crow. Not big enough. I’m going to eat raven.”

USGS cartographer Alden Colvocoresses, who had been cynical about any cartographically accurate data being collected with “a little mirror in space,” turned to his colleagues in the room and said simply, “Gentlemen, that’s a map.”

To the surprise of many, it was the ride-along secondary instrument of Landsat 1, the experimental Multispectral Scanner System that became the mission’s imaging powerhouse.

The MSS instrument represented many “firsts.” It was the first space-based sensor to digitally encode and transmit Earth surface data, and the first Earth-observing instrument to obtain in-orbit calibration data, which meant it was the first instrument Earth scientists could use to make robust comparisons of changes to Earth’s surface over time and across geographies. It quickly proved itself better than the primary Return Beam Vidicon instrument—and a good thing, too, because just 15 days after launch a major electrical short associated with the RBV’s power-switching circuit caused enough problems that the RBV was shut down for the rest of the satellite’s mission.

The MSS data’s accurate geometric fidelity made it a major cartographic tool, and the low sun angle of Landsat’s mid-morning acquisition time accentuated the shadows of topographic features, making the images especially valuable to geologists. But many fields, including agriculture, forest management and marine studies, found the data useful.

A diagram of a Multispectral Scanner System (MSS) instrument.
NASA/Hughes Santa Barbara Research Center

The Explorer 1 mission had begun the U.S. forays into space, yet a striking realization that came from the space-bound missions that followed Explorer 1 in quick succession (Mercury, Gemini, Apollo) was that space offered a distinctive vantage point for observing our home planet.

A few months prior to the Landsat 1 launch, former Secretary of the Interior and Landsat champion Stuart Udall had explained to The New York Times, “I thought an Earth applications program was a perfect means of bringing the benefits of space back to Earth.”

Once Landsat and its MSS instrument had proved themselves after launch, NASA Administrator James C. Fletcher confirmed Udall’s belief, remarking that Landsat was “a second giant stride for mankind” because of the new technology’s potential to improve the understanding of environmental issues. He went on to say that Landsat had a “profound effect on the thinking of the world, particularly on our approach to emerging problems of protecting our environment and maintaining the quality of life for all of Earth’s people…not just clean air and water, but clean land.”

The First Space-Based GPS Satellite Tracking Experiment, 1982

On July 16, 1982, the fourth Landsat satellite—carrying “the most complex and pioneering Earth viewing instrument ever proposed for a NASA program” at the time—took to the sky.

Nearly everything about this second-generation Earth observation satellite had been upgraded from its Landsat 1, 2, and 3 predecessors. In addition to an MSS sensor, Landsat 4 carried a second-generation Earth-observing sensor, called the Thematic Mapper or TM instrument. The TM, a more advanced version of the MSS, was only one aspect of the mission’s radical redesign.

Artist’s concept of the Landsat 4 satellite in position for repair in the Space Shuttle cargo bay. 
NASA/Hughes Santa Barbara Research Center

The Landsat 4 spacecraft was a custom-designed platform, not the repurposed Nimbus weather satellite platform used for the first three Landsats. But the mission requirements were many—the satellite was required to be Space Shuttle rendezvous ready (for the concept of Shuttle-based repairs); to carry a large antenna (at the end of a 12.5-foot boom) for communicating with NASA’s Tracking and Data Relay Satellite System (TDRSS); and to carry a GPS receiver.

Schematic showing the Landsat 1 satellite in orbit and how the MSS used a scan mirror to build an image six lines at a time as it traveled over its ground path.
NASA

Landsat 4 was the very first civilian satellite to carry a spaceborne GPS receiver package and to use GPS signals to calculate its position. The concept of GPS was so new at this time that in Landsat 4 press communications, the acronym “GPS” had to be written out and described as “a new US Air Force satellite navigation system involving orbiting navigational satellites to triangulate the exact position of other satellites which require navigation information as part of their data communication to Earth Stations.”

GPS receivers were used on both the Landsat 4 and 5 satellites to assess whether GPS could deliver more accurate position-location data than traditional methods (ground-predicted ephemeris, or mathematically modeled locations).

GPS was in its infancy, and only four of the planned 24 GPS constellation satellites were in orbit at the time of Landsat 4’s launch, so there were often times during Landsat 4’s orbit when no GPS satellites were in range.

Two researchers at NASA’s Goddard Space Flight Center, Howard Heuberger and Leonard Church, presented a paper on the Landsat 4 GPS navigation results demonstrating that GPS could establish Landsat 4’s position to within 50 meters, and its velocity within six centimeters per second—when the GPS satellites were in view. Though these error margins grew exponentially when GPS satellites were out of reach (because of lapses between measurements), Heuberger and Church concluded that GPS was a good alternative for supplying onboard ephemeris to future spacecraft systems even before the full GPS constellation was in orbit.
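The position fix that Heuberger and Church evaluated rests on the pseudorange triangulation described earlier: with four satellites in view, a receiver can solve for its three position coordinates plus its own clock bias. The toy sketch below illustrates that geometry with a Gauss-Newton solve. The satellite positions, receiver location, and clock bias are all invented for illustration; this is in no way Landsat 4’s actual navigation processing.

```python
import math

# Toy GPS pseudorange triangulation: with four satellites in view,
# solve for receiver position (x, y, z) and clock bias via
# Gauss-Newton. All positions and the bias below are invented.

sats = [  # hypothetical satellite positions (meters, Earth-centered)
    (15_600e3,  7_540e3, 20_140e3),
    (18_760e3,  2_750e3, 18_610e3),
    (17_610e3, 14_630e3, 13_480e3),
    (19_170e3,    610e3, 18_390e3),
]
true_pos = (1_917e3, 6_378e3, 100e3)  # made-up receiver position (m)
true_bias = 15_000.0                  # made-up clock bias, in meters

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Simulated pseudoranges: geometric range plus the clock-bias offset.
rho = [dist(s, true_pos) + true_bias for s in sats]

def gauss_solve(A, b):
    """Solve a small dense linear system by Gaussian elimination."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Gauss-Newton: linearize rho_i = |s_i - p| + b around the estimate.
# A coarse initial guess near Earth's surface keeps us in the right basin.
est = [1_000e3, 6_000e3, 0.0, 0.0]  # x, y, z, clock bias (all meters)
for _ in range(10):
    jac, res = [], []
    for s, rho_i in zip(sats, rho):
        d = dist(s, est[:3])
        jac.append([(est[k] - s[k]) / d for k in range(3)] + [1.0])
        res.append(rho_i - (d + est[3]))
    delta = gauss_solve(jac, res)
    est = [e + dl for e, dl in zip(est, delta)]

pos_err = dist(est[:3], true_pos)
print(f"position error: {pos_err:.2e} m, bias error: {abs(est[3] - true_bias):.2e} m")
```

With noise-free simulated pseudoranges the solve recovers the position essentially exactly; the real experiment’s 50-meter error came from measurement noise, geometry, and the sparse early constellation.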

An exploded-view diagram showing the individual components of Landsat 4’s instruments and design.
NASA

The experiment was largely a success, but deemed not ready for operational use. It was not until the launch of Landsat 8 in 2013—almost three decades after the Landsat 4 GPS experiment—that GPS receivers would become a routine part of Landsat spacecraft.

For an exhaustive technical history of the Landsat program, see the new book: Landsat’s Enduring Legacy: Pioneering Global Land Observations from Space.


My NASA Experience


By Marcia J. Rieke
An image of the NIRCam Engineering Diagram

The development of infrared detector arrays is intertwined with my experiences working on NASA projects. As an astronomer at a university, my interactions with NASA all start with a proposal in response to an opportunity. In 1983, near-infrared detector arrays were beginning to attract the attention of astronomers. At the suggestion of Nancy Boggess at NASA Headquarters, we wrote a proposal to the NASA Research and Analysis Program to obtain an array and test it. At the time, I was a member of the Infrared Astronomy Group working with George Rieke using a single light-sensing element (i.e., a 1-pixel array!) on ground-based telescopes, and I was only starting to become cognizant of astronomy opportunities with NASA.

In this initial proposal, we wrote that the array we were contemplating acquiring from what was then called Rockwell International (now Teledyne Imaging Systems) would potentially be useful for infrared instruments on the Hubble Space Telescope (HST). We were not thinking of proposing such an instrument ourselves, as we were preoccupied with proposing an instrument for SIRTF, which was later renamed Spitzer.

Our proposal was selected, and we purchased a 32×32 HgCdTe array (wow, a whole kilopixel!). Taking a device to the telescope where one could actually take an infrared picture, rather than creating a picture by scanning a single pixel back and forth, made me feel even happier than a kid in a candy store. Some of my colleagues called it my “toy” camera, but it was so much fun. I remember characterizing the performance of this array, since performance would be of obvious importance if such arrays were to be used on future NASA missions.

During testing, I discovered that the dark current of our first device was orders of magnitude less than what Rockwell had quoted. This needed to be understood, because if my result was correct, then this class of infrared array would be a candidate for second-generation HST instruments. I called Rockwell and quizzed the staff about how they had measured the dark current on the array that they had sent us. The Rockwell test engineer explained that he had put a piece of aluminum foil over the dewar window to ensure that the array was in the dark. Well, that was the answer. Yes, the aluminum foil prevented visible light from entering the test dewar, but since it was at room temperature, it was emitting loads of infrared photons. Based on this discovery, we decided to propose for a second-generation HST instrument, which eventually became NICMOS. As part of the development funding for that instrument, we moved all the way up to a 256×256 pixel array ― 65.5 kilopixels, but still not even a 1-megapixel camera. As a result of my involvement in the early steps of working with HgCdTe arrays, I became the Deputy PI for NICMOS and became deeply involved in a NASA project. NICMOS was the first use in space of the style of near-infrared array that has now become the standard for infrared arrays.
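The foil story comes down to the Planck law: room-temperature metal glows in the infrared even though it blocks visible light. The back-of-the-envelope sketch below uses my own illustrative numbers (the band, the emissivity, and the pixel geometry are all assumptions, not Rockwell’s test conditions) to show how many near-infrared photons 295 K foil pours onto a detector.

```python
import math

# Rough estimate of the infrared photon glow of room-temperature
# aluminum foil, to show why foil over a dewar window does NOT make
# the inside dark in the infrared. Band, emissivity, and pixel
# geometry below are illustrative assumptions.

h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def photon_radiance(lam, t):
    """Blackbody photon spectral radiance, photons s^-1 m^-2 sr^-1 m^-1."""
    x = h * c / (lam * k * t)
    return (2.0 * c / lam**4) / math.expm1(x)

T_room = 295.0     # K
lam = 2.4e-6       # m, inside a typical HgCdTe cutoff (assumed)
dlam = 0.1e-6      # m, narrow band for the estimate
emissivity = 0.05  # polished aluminum: a poor but nonzero emitter

flux = emissivity * photon_radiance(lam, T_room) * dlam
print(f"~{flux:.1e} photons / s / m^2 / sr at 2.4 um")

pixel_area = (40e-6) ** 2  # 40 um pixel (assumed)
solid_angle = 0.2          # sr, fast camera optics (assumed)
per_pixel = flux * pixel_area * solid_angle
print(f"~{per_pixel:.0f} photons / s onto one pixel")
```

Tens of thousands of photons per second per pixel would completely swamp a dark current of a few electrons per second, which is why the foil-covered test could not measure the array’s true dark current.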

Near the end of my involvement with NICMOS, and before Spitzer was launched, another opportunity presented itself. People were discussing a “Next Generation Space Telescope” that would push the limits of detectability back to the first stage of galaxy formation. I replied to a letter soliciting members, and I set out to work on this new project. I stuck with it and responded to the Announcement of Opportunity in 2001, and this triggered a chain of events that has led to my being PI of the NIRCam instrument on the James Webb Space Telescope. The detector arrays in NIRCam are each 2048×2048 pixels (i.e., 4 megapixels), with the entire camera holding 40 megapixels ― a long way from my first 1-kilopixel array camera!

About the Author

  • Marcia J. Rieke

    Marcia J. Rieke is a professor of Astronomy at the University of Arizona and is the principal investigator for the near-infrared camera (NIRCam) on the James Webb Space Telescope. Rieke came to the University of Arizona (UA) in 1976 and has made seminal contributions to infrared astronomy. She has served as the deputy principal investigator on the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) for the Hubble Space Telescope, and the outreach coordinator for the Spitzer Space Telescope. A fellow of the American Academy of Arts and Sciences, Rieke received her undergraduate and graduate degrees in physics from the Massachusetts Institute of Technology, Cambridge, Massachusetts.



The Gestation of the Hubble


By Nancy Grace Roman

Looking through the atmosphere is like looking through a piece of old stained glass. The glass has defects that distort the image. The atmosphere also has defects that distort the image, but the defects in the atmosphere move, thus blurring the image as well. The glass is colored, so only some colors get through.

Until the mid-20th century, that did not appear to be a major problem. Stars primarily radiate like black bodies, and their temperatures are such that their radiation comes through the atmosphere; our eyes adapted to seeing it. The development of radio astronomy, as a result of the technology stimulated by World War II, proved that the universe was far more complex and far more interesting than the staid view in the visible. This made astronomers eager to detect colors that do not come through the atmosphere. In addition, the glass is dusty. The dust scatters light, making the background brighter and harder to see through. The molecules in the atmosphere also scatter light. This is why we cannot see stars in the daytime. It also keeps us from seeing the faintest stars at night. Finally, unlike the glass, the atmosphere shines faintly, making the faintest objects invisible from the ground.

For these reasons, astronomers had been anxious for decades to put telescopes above the atmosphere, and they jumped at the opportunity provided by the opening of the Space Age. The first NASA astronomy missions hunted for high-energy radiation in gamma ray and X-ray regions of the spectrum. These searches relied on techniques that had been developed for decades for the measurement of cosmic rays and for studying high-energy phenomena in laboratories.

We knew from rocket observation that the Sun displayed interesting effects in the ultraviolet that changed continuously. This was an impetus behind the Orbiting Solar Observatories. Stellar astronomers were also interested in the ultraviolet. Young, massive stars emit most of their energy in that region. In addition, the strongest and simplest lines of common, light elements are in the ultraviolet. Without observations of these lines, it was impossible to analyze the compositions of stars. This led to the development of the Orbiting Astronomical Observatories with their emphasis on the ultraviolet of stars. We were less interested in the infrared at that time, and detector technology was too primitive to make this region easily accessible.

These instruments provided an exciting introduction to space astronomy, but astronomical objects are very distant. That makes them appear faint and tiny. A large mirror is required to collect enough light to analyze any but the brightest stars. The fineness of the detail that is discernible is a direct function of the size of the mirror. Thus, to take advantage of the dark sky and steady images above the atmosphere requires a large mirror. For decades, astronomers had longed for a large space telescope. In 1946, Lyman Spitzer wrote a short paper for the Rand Corporation describing the science that could be learned with a 4-meter telescope in space. This is generally considered the impetus for such a telescope in the U.S.

From time to time, NASA asks the National Academy of Sciences (NAS) for advice on its science program. In the summer of 1962, the Academy assembled a group of scientists at the University of Iowa, dividing the group into various committees representing different areas of science, including one for astronomy. One astronomer had studied the characteristics of the Saturn rocket and determined that it could carry a 3-meter telescope. The entire astronomy committee jumped on the idea. That is what they really wanted. I thought that it was too early to start work on such a project. I knew how much trouble we were having trying to develop a satellite and instrumentation for a 6-inch telescope. This telescope was not successful until 1968. Thus, I essentially ignored the idea.

At that time, NASA’s Langley Research Center (LaRC) was responsible for NASA’s human space program. Some of the engineers there jumped on the idea of developing a large, manned orbiting telescope. The NAS conducted another study in the summer of 1965. By this time, the astronomers only argued about whether the telescope should be in orbit or on the Moon. The latter would provide a stable base, making the telescope less sensitive to the motion of parts, and also provide a reference system for the pointing controls. Connected to a manned base, it could be used much as ground-based telescopes are used. There were also disadvantages with the Moon. Perhaps the most serious one was that it was unclear how soon such an installation would be feasible. The Moon appeared to be undesirably dusty. Moreover, its motion is complex, making the guidance difficult before modern computers were well developed. Nevertheless, the issue remained alive until the early 1970s.

Several aerospace companies were intrigued by the LaRC idea and presented designs for a manned, large space telescope. This was the last thing astronomers wanted! Aside from the fact that research had not been done by a person looking through a telescope for almost a century (with one small exception), a man needed an atmosphere, and that was what we were trying to get away from. In addition, a man would wiggle during long exposures, and that would cause the telescope floating in orbit to wiggle in the opposite direction, blurring the image. I still thought it was too early to design a satellite for a 3-meter telescope, but decided that if companies were going to spend money designing such a satellite system, they might as well design a usable one.

A major problem at this stage was to win the support of the general astronomical community, many of whom had no interest in observations from space. One facet of attacking this problem was to set up a working group under the auspices of the National Academy of Sciences (NAS) on the uses of a Large Space Telescope (LST), under the direction of Lyman Spitzer.

The committee held an early meeting in Pasadena, California, to discuss the use of such a telescope for studies of galaxies, cosmology, and interstellar matter. Numerous West Coast astronomers attended the meeting, increasing their understanding of the possibilities and, hence, somewhat decreasing their antipathy. Although the members of the working group were supporters, the cachet of the NAS gave their report, which was published in 1969, special importance. I met with many astronomers to discuss the promise of a 3-meter telescope above the atmosphere. In addition, I gave many illustrated public talks on the questions that we expected such a telescope to answer, although I also emphasized that the most important results would be those we could not predict.

The Astronomy Working Group that had been established to advise me on the entire astronomy program also started to discuss what was really needed for a successful LST and the engineering problems that required solution. By 1971, I assembled an LST Science Steering Group to work only on the LST. For this, I assembled a group of astronomers from all over the country representing various interests that could be served by a large space telescope and some NASA engineers to sit down and outline a design that would meet the needs of the astronomers and that the engineers thought would be doable. Purposely, I included several who were not really enthusiastic about the project but whose science could benefit from the program. Together, we sketched the system that would become the basis for the Hubble.

After about 2 years, a more detailed design was needed. NASA’s Marshall Space Flight Center was assigned the responsibility for turning our sketch into a design. I maintained a general overview of the continued developments as program scientist, but Robert O’Dell was hired in September 1972 as the project scientist, with the detailed responsibility for keeping the scientific requirements at the center of the planning.

At one point, there was a strong push to decrease the diameter of the mirror, probably to make use of facilities that existed for other purposes. We were asked to consider mirror sizes of 2.4 m and 1.8 m. A primary objective of the telescope was to determine the brightness of Cepheid variables in the Virgo cluster of galaxies. Hubble had shown that the velocity of recession of distant galaxies was proportional to their distance. However, the proportionality constant was uncertain by a factor of two. Galaxies have random motions. The velocities of distant galaxies are small compared to the velocity caused by expansion, but for nearby galaxies, these random motions overwhelm the general expansion. Moreover, the nearby galaxies are in a group in which they interact gravitationally.

To determine the proportionality constant it was necessary to determine the distance of a cluster of galaxies not interacting with nearby galaxies and distant enough that the random velocities are not significant on the average. The nearest suitable cluster is the Virgo cluster of galaxies at a distance of about 54 million light years. Henrietta Leavitt had shown that the brightness of a particular class of variable stars, called Cepheids, was an accurate function of the periods of variation. We could calibrate this relation for Cepheids in the Milky Way galaxy. Thus, if we could observe these variables in the Virgo cluster, we could determine the distance of the cluster. Measuring the velocity of the expansion was easy. I and, independently, several others determined that with the available detectors, we could reach the Cepheid variables in the Virgo cluster with a 2.4 m mirror but that we could not do so with a 1.8 m mirror. Dropping the mirror diameter to 2.4 m also made the design of a satellite that would fit the space shuttle easier.
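The chain of reasoning in the last two paragraphs can be put in a few lines: a locally calibrated period-luminosity relation gives a Cepheid’s absolute magnitude, the distance modulus converts its apparent magnitude into a distance, and the cluster’s recession velocity divided by that distance yields the proportionality (Hubble) constant. All the numbers below ― the P-L coefficients, the hypothetical 30-day Cepheid, its apparent magnitude, and the velocity ― are illustrative stand-ins, not Hubble measurements.

```python
import math

# Sketch of the Cepheid distance-ladder logic. The period-luminosity
# coefficients and the "observed" values are illustrative only.

def cepheid_abs_mag(period_days, a=-2.81, b=-1.43):
    """Illustrative V-band period-luminosity relation M = a*log10(P) + b."""
    return a * math.log10(period_days) + b

def distance_mpc(apparent_mag, absolute_mag):
    """Distance from the distance modulus m - M = 5*log10(d_pc) - 5."""
    d_pc = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_pc / 1e6

# A hypothetical 30-day Cepheid observed at m = 25.5 in a Virgo galaxy.
M = cepheid_abs_mag(30.0)   # absolute magnitude from the P-L relation
d = distance_mpc(25.5, M)   # distance in megaparsecs
v = 1200.0                  # km/s, an assumed Virgo recession velocity
H0 = v / d                  # Hubble constant, km/s per Mpc
print(f"d = {d:.1f} Mpc, H0 = {H0:.0f} km/s/Mpc")
```

The easy part, as the text notes, is the velocity; everything difficult about the telescope ― the mirror size, the detectors ― was in reaching the faint apparent magnitudes of Cepheids at the Virgo cluster’s distance.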

As the early design developed, it was necessary to make a place for the project in the NASA plans. It was relatively easy to convince my superiors in NASA that such a telescope would be worth the cost. Convincing the political community, with little understanding of science, was more difficult. James Webb, the administrator of NASA at that time, gave a series of dinners for men with political power. After each dinner, three of us presented a “dog and pony show.” Jesse Mitchell discussed the engineering and its feasibility, Dick Halpern presented the management plans, and I described the scientific research we expected to do with the telescope. I never testified before Congress, but I did write congressional testimony to justify the Large Space Telescope for about 10 years. I also pitched the case for the telescope to representatives of the Bureau of the Budget (now the Office of Management and Budget), the agency that prepares the budget the president sends to Congress. At some point, for political reasons, the word “Large” was dropped from the name, with the satellite simply becoming the Space Telescope until launch.

In spite of these efforts, Congress repeatedly postponed approval for construction. Even after construction started, Congress cut the budget below the optimum level, which of course increased the final cost of the mission. By the early to mid-1970s, astronomers had organized major lobbying efforts, which finally led to the approval of the project.

At one point, then-Sen. William Proxmire (D-Wisconsin), noted for ridiculing government funding that he considered frivolous, asked NASA why the American taxpayer should support an expensive telescope. I did a back-of-the-envelope calculation and determined that for the cost of one night at the movies, every American would get 15 years of exciting discoveries. I was probably off by a factor of four or five, depending on how launch and servicing costs are allocated, but we shall probably have 25 years of discoveries. Even at the cost of a night at the movies once a year, which would more than cover the expense by any accounting, I think most Americans believe the expenditure has been worth it.

At the time the Hubble was being designed, NASA was pitching the space shuttle as a cheap way to launch spacecraft. Lowering the cost required a busy launch schedule, so all satellites were designed to be launched by the shuttle, and several were designed to be serviceable. The Hubble was scheduled for the next flight after the Challenger accident. That catastrophe halted all shuttle launches for 3 years, during which the satellite was kept in storage and a knowledgeable group of engineers was kept on the payroll until the 1990 launch. These 3 wasted years added significantly to the cost of the mission.

The Challenger experience caused NASA to rethink its use of the shuttle for most missions. Most payloads had to be redesigned for robotic launches. Fortunately, the Hubble was too far along to be changed. The ability to service it with the shuttle not only saved the basic mission after the mirror problem was discovered but also made it possible to replace instruments from time to time with more modern versions, greatly increasing the capability of the telescope.

As mentioned earlier, I started funding detector development early in the program. A major portion of the funding for ultraviolet detectors went to Princeton University, which subcontracted to Westinghouse for the development of an intensified vidicon for the telescope camera. The Steering Group, and later the Working Group, assumed that this detector was already chosen. As the time approached for selecting the scientific instruments for the telescope, I was dissatisfied with the progress on the intensified vidicon. At a Steering Group meeting shortly before the selection of the instruments, I arranged a presentation of various types of detectors.

Charge-coupled devices (CCDs) had clear advantages in resolution, sensitivity, and stability. These are arrays of tiny solid-state detectors (pixels), each sensitive to photons. At the conclusion of an exposure, the charge recorded by each pixel is read out sequentially down its column, and the columns are then read across, producing a map of intensity as a function of position, that is, a picture. Commercial establishments were strongly interested in supporting their development. (They are the basis of the modern digital camera and are also used for TV cameras.) One problem is that a bare CCD is not sensitive in the ultraviolet. Nevertheless, as a result of this presentation, the Working Group decided to open the choice of detector for the camera. When a proposal from Jim Westphal solved the ultraviolet sensitivity problem by coating the CCD surface with an organic substance that fluoresces in the visible when struck by ultraviolet light, the vidicon lost the competition.
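The readout bookkeeping described above can be mimicked with a toy model. This is only a sketch of the sequential row-by-row, pixel-by-pixel readout, not of the actual charge-transfer electronics.

```python
def read_out(ccd):
    """Toy model of CCD readout: rows of accumulated charge are shifted
    one at a time into a serial register, which is then read pixel by
    pixel, rebuilding the image as intensity versus position."""
    image = []
    while ccd:
        serial_register = ccd.pop(0)          # shift one row of charge down
        image.append([serial_register.pop(0)  # read the register across
                      for _ in range(len(serial_register))])
    return image

# A 2x2 "exposure" of photon counts; readout empties the detector
frame = [[3, 1], [0, 7]]
picture = read_out(frame)  # [[3, 1], [0, 7]]
```

The readout is destructive, as in a real CCD: after the loop the detector array is empty and the picture holds the recorded intensities in their original positions.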

Many in the astronomical community were unhappy with NASA management of the Space Telescope. They wanted it in the hands of astronomers, with a management contractor, in the way that the National Optical and Radio Observatories were handled. This overlooked the fact that the scope of LST construction and operation was far larger than that of the ground-based observatories. Nevertheless, there was one area in which the community's insistence on operation by scientists was non-negotiable: the scientific management of the operation. This nearly cost me my job. Goddard badly wanted the scientific operation of the telescope. After considering this, I decided that it was much too big a job for the small astronomy group at Goddard, even if the astronomical community would have stood still for such an arrangement. As a result, the scientific and astronomy leaders at Goddard talked Noel Hinners into transferring me to a different job. I decided that I did not want the other job and stayed put for a year or so.

I took advantage of an early-out period to retire in 1979, but continued for 9 months longer as the Space Telescope program scientist in order to participate on the Source Selection Board for the Space Telescope Science Institute, which would manage the scientific operations of the Space Telescope. I found this an interesting experience. There were five proposals, four of which based the Institute at Princeton University. The proposals from Associated Universities Incorporated, which managed the National Radio Astronomy Observatory, and from the Association of Universities for Research in Astronomy, which managed the National Optical Astronomy Observatories, were highly competitive, and the decision between them was difficult. The latter placed the Institute at Johns Hopkins University in Baltimore. Many people believed that it was selected because Baltimore is closer to Goddard. That proximity has helped over time, but it did not enter our deliberations.

I left the project before substantial management problems arose, leaving their solution to my successor, Ed Weiler. He also had to handle the discovery of the mirror problem. It was clear from his actions in these major fiascos that I had left the project in good hands.

About the Author

  • Nancy Grace Roman

    Nancy Grace Roman received her Ph.D. in astronomy from the University of Chicago in 1949. She joined NASA in 1959 and became the first chief of astronomy in the Office of Space Science, where she had oversight for the planning and development of programs including the Cosmic Background Explorer and the Hubble Space Telescope. Dr. Roman finished her NASA career at the Goddard Space Flight Center, retiring as manager of the Astronomical Data Center in 1979, and continued to work at Goddard as a contractor. The first woman to hold a leadership position at NASA, Dr. Roman has been an advocate for women in the sciences throughout her career.

A black-and-white historical photograph shows a smiling woman with short, wavy hair and cat-eye glasses looking toward the camera. She is positioned next to a technical model of a satellite, which features a grid-patterned dish and protruding scientific instruments. The woman is wearing a light-colored button-down shirt, and the background is a simple, bright indoor space, highlighting her and the model as the primary subjects.


Last Updated
Feb 18, 2026


NASA’s Hubble Identifies One of Darkest Known Galaxies

At left, a field of space with a dozen white foreground stars and a number of small, yellow background galaxies. An unremarkable area at center is outlined with a dashed red circle surrounded by a white box. Lines extend from the box to a pullout at right containing faint, grainy white light surrounded by a red circle labeled “Candidate dark galaxy – diffuse emission.” Four white dots are circled in blue and labeled globular clusters.
The low-surface-brightness galaxy CDG-2, within the dashed red circle at right, is dominated by dark matter and contains only a sparse scattering of stars. The full image from NASA’s Hubble Space Telescope is at left.
NASA, ESA, Dayi Li (UToronto); Image Processing: Joseph DePasquale (STScI)

In the vast tapestry of the universe, most galaxies shine brightly across cosmic time and space. Yet a rare class of galaxies remains nearly invisible — low-surface-brightness galaxies dominated by dark matter and containing only a sparse scattering of faint stars.

One such elusive object, dubbed CDG-2, may be among the most heavily dark matter-dominated galaxies ever discovered. (Dark matter is an invisible form of matter that does not reflect, emit, or absorb light.) The science paper detailing this finding was published in The Astrophysical Journal Letters.

Detecting such faint galaxies is extraordinarily difficult. Using advanced statistical techniques, Dayi Li of the University of Toronto, Canada, and his team identified 10 previously confirmed low-surface-brightness galaxies and two additional dark galaxy candidates by searching for tight groupings of globular clusters — compact, spherical star groups typically found orbiting normal galaxies. These clusters can signal the presence of a faint, hidden stellar population.

To confirm one of the dark galaxy candidates, astronomers employed a trio of observatories: NASA’s Hubble Space Telescope, ESA’s (European Space Agency) Euclid space observatory, and the ground-based Subaru Telescope in Hawaii. Hubble’s high-resolution imaging revealed a close collection of four globular clusters in the Perseus galaxy cluster, 300 million light-years away. Follow-up studies using Hubble, Euclid, and Subaru data then revealed a faint, diffuse glow surrounding the star clusters — strong evidence of an underlying galaxy.

“This is the first galaxy detected solely through its globular cluster population,” said Li. “Under conservative assumptions, the four clusters represent the entire globular cluster population of CDG-2.”

[Video: NASA’s Goddard Space Flight Center; Lead Producer: Paul Morris]

Preliminary analysis suggests CDG-2 has the luminosity of roughly 6 million Sun-like stars, with the globular clusters accounting for 16% of its visible content. Remarkably, 99% of its total mass — which includes both visible matter and dark matter — appears to be dark matter. Much of the normal matter needed for star formation, primarily hydrogen gas, was likely stripped away by gravitational interactions with other galaxies inside the Perseus cluster.
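The quoted figures imply an enormous mass-to-light ratio. A quick check of the arithmetic, assuming a stellar mass-to-light ratio of 2 in solar units (an assumption for illustration; the paper's detailed modeling will differ):

```python
luminosity = 6e6        # CDG-2 luminosity in solar luminosities (from the text)
dark_fraction = 0.99    # fraction of total mass in dark matter (from the text)
stellar_m_to_l = 2.0    # assumed stellar mass-to-light ratio, solar units

stellar_mass = stellar_m_to_l * luminosity         # ~1.2e7 solar masses of stars
total_mass = stellar_mass / (1.0 - dark_fraction)  # ~1.2e9 solar masses in all
total_m_to_l = total_mass / luminosity             # ~200 in solar units
```

A total mass-to-light ratio of order 200, versus roughly 1 for the Sun, is what "heavily dark matter-dominated" means quantitatively under this assumption.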

Globular clusters possess immense stellar density and are gravitationally tightly bound. This makes the clusters more resistant to gravitational tidal disruption, and therefore reliable tracers of such ghostly galaxies.

As sky surveys expand with missions like Euclid, NASA’s upcoming Nancy Grace Roman Space Telescope, and the Vera C. Rubin Observatory, astronomers are increasingly turning to machine learning and statistical methods to sift through vast datasets.

The Hubble Space Telescope has been operating for more than three decades and continues to make ground-breaking discoveries that shape our fundamental understanding of the universe. Hubble is a project of international cooperation between NASA and ESA (European Space Agency). NASA’s Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope and mission operations. Lockheed Martin Space, based in Denver, also supports mission operations at Goddard. The Space Telescope Science Institute in Baltimore, which is operated by the Association of Universities for Research in Astronomy, conducts Hubble science operations for NASA.

Editor
Andrea Gianopoulos
Contact
Media

Claire Andreoli
NASA’s Goddard Space Flight Center
Greenbelt, Maryland
claire.andreoli@nasa.gov

Christine Pulliam
Space Telescope Science Institute
Baltimore, Maryland
