Science & Technology - March 2022

Researchers have succeeded in isolating the entire genome of the dodo, a bird that went extinct in the 17th century

“Can we bring animals back from extinction?” In a panel discussion on this topic organised recently by the Royal Society, evolutionary biologist Beth Shapiro mentioned that if there were an animal she would like to bring back from extinction, it would be the dodo. This follows the success of researchers at her lab at the University of California, Santa Cruz, in isolating the entire genome of the dodo.

What is the dodo, and why is it so interesting?

The dodo is a bird that lived on the island of Mauritius and was last spotted about 360 years ago, in 1662. It has been extinct ever since. It would not be an exaggeration to say that it is the very symbol of extinction: the phrase “dead as a dodo” is commonly used in English to refer to something totally dead. The bird’s form has been reconstructed from old drawings, and the closest likeness is thought to be an Indian Mughal painting rediscovered in the Hermitage Museum in St Petersburg. In the painting it is slimmer and browner than in other depictions, and the painting is believed to be more accurate because the dodo is pictured along with other birds that can be easily identified. It is by the Mughal painter Ustad Mansur, probably commissioned by Emperor Jahangir, who was famous for getting flora and fauna documented in paintings.

So, bringing the dodo alive would be the ultimate story of de-extinction.

What did the dodo look like?

A species endemic to the island of Mauritius, the dodo is believed to have been about 1 metre tall, flightless, and to have weighed between 10 and 18 kilograms. Its real appearance is known only from paintings and drawings, which vary a lot.

What animal is its closest living relative and how do we know that?

Beth Shapiro’s lab has sequenced the complete genome of the dodo, in work that is as yet unpublished, and she says that the closest living relative of the dodo is the Nicobar pigeon.

How does one recreate the full genome of an ancient, dead animal?

You need a specimen of the animal that has not been fossilized over the ages. Icy places like permafrost may contain remains of living beings in such a preserved form.

A piece about the size of a fingernail is taken from such a specimen and broken down into fragments. Under strictly uncontaminated conditions, this is added to a PCR kit, which amplifies the DNA and makes many, many copies. From these fragments, the entire genome is pieced together by comparing it to the genomes of close living relatives.
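As a rough illustration of that last step, here is a toy sketch (not the actual pipeline; the sequences and fragments are invented for illustration) of reference-guided assembly: short fragments from the specimen are placed where they match a close relative's genome, and a reconstruction is filled in from them.

```python
# Toy reference-guided assembly: place specimen fragments ("reads") where
# they match a relative's genome and fill in a reconstruction from them.
# All sequences below are invented purely for illustration.

relative_genome = "ATGCGTACGTTAGCATCGATCGGATCCTAG"   # stand-in for a close relative's sequence
reads = ["ATGCGTACGT", "ACGTTAGCAT", "GCATCGATCG", "ATCGGATCCTAG"]  # specimen fragments

reconstruction = list("N" * len(relative_genome))    # unknown bases marked 'N'
for read in reads:
    pos = relative_genome.find(read)                 # locate the (here: exact) match
    if pos != -1:
        reconstruction[pos:pos + len(read)] = read   # copy the fragment's bases in

print("".join(reconstruction))                       # gaps would remain where nothing matched
```

Real ancient-DNA work deals with millions of short, damaged reads and statistical alignment rather than exact matching, but the logic of anchoring fragments to a relative's genome is the same.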

First, the researchers in Shapiro’s team tried to do this with a piece taken from a specimen at Oxford, but it was not nearly enough. Later they found a specimen in Denmark from which they were able to sequence the entire genome.

Can the genome be used to resurrect an animal, in particular the dodo?

Beth Shapiro explained in the discussion that the known way of doing this would be to insert parts of the extinct animal’s genome into a framework provided by its close relative: for example, inserting the mammoth genome into the elephant’s to construct a cell containing a sufficient amount of mammoth genome, and then cloning it the way Dolly the sheep was cloned. But while this process is somewhat understood in the case of mammals, a new process has to be worked out for birds. “There are different groups that are working on this, and I have no doubt that we’ll get there, but this is a hurdle we face with birds,” she said.




The Hindu’s Science for All newsletters are carefully curated to help you understand everyday events as well as the wonders of the universe.


The M2 protein is considered a holy grail for designing a universal flu vaccine. Seasonal influenza strains mutate rapidly and new strains of the virus proliferate, which makes it very difficult to make a vaccine that consistently generates a sufficient degree of immunity.

The M2e peptide is a section of the influenza virus that is conserved, meaning it does not undergo many mutations. Researchers have observed over the years that the M2e peptide region is largely unchanged across the various influenza A strains. It is therefore possible to design a vaccine that targets this peptide and primes the immune system to generate antibodies against it.

For this reason, M2e has for years been seen as a leading universal flu vaccine candidate. However, it has a limited ability to trigger a strong and long-lasting immune response, and this has been a major roadblock in its clinical development.

Recently, researchers reported a novel vaccine platform to deliver M2e to immune cells. By deploying this platform, a single-shot vaccine containing M2e was able to trigger long-lasting immune responses that protected effectively against multiple strains of the flu.

The team was also able to demonstrate that this vaccination approach significantly enhanced protective immune responses in the context of pre-existing flu immunity, a situation particularly relevant in adult and elderly populations, where individuals have been exposed to flu viruses multiple times in the past and have low levels of M2e-specific antibodies in their blood circulation.

This vaccine approach has the potential to minimise the amount of M2e vaccine antigen (a substance that triggers the body’s immune response against itself) and the need for strong adjuvants (substances that enhance the body’s immune response to an antigen), reducing potential side-effects, particularly in more vulnerable populations.


From the Science Page

T cell immune responses seen a year after infection

How mitochondria adapted to living within cells

Smoking causes over seven million deaths a year

Question Corner

What is the effect of thawing permafrost on the seafloor?

Of Flora and Fauna

How are mosquitoes able to avoid insect repellents?

Do looks correlate with caregiving in frogs and toads?

How do damaged plants warn neighbours about herbivore attacks?




What kind of microplastics were found in human blood in a recent study? Can these particles travel through the body?

The story so far: Microplastics are, as the name suggests, tiny particles of plastic found in various places — the oceans, the environment, and now in human blood. A study by researchers from the Netherlands (Heather A. Leslie et al, Environment International, published online 24 March) examined blood samples of 22 persons, all anonymous donors and healthy adults, and found plastic particles in 17 of them. A report on this work published in The Guardian conveys that about half of these were PET (polyethylene terephthalate) plastics, which is used to make food-grade bottles. The particles the group looked for were as small as about 700 nanometres (equal to 0.0007 millimetres). This is really small, and it remains to be seen whether there is a danger of such particles crossing out of the bloodstream and affecting the organs. Also, a larger study needs to be conducted to firm up the present findings.

What are microplastics?

Microplastics are tiny bits of various types of plastic found in the environment. The name is used to differentiate them from “macroplastics” such as bottles and bags made of plastic. There is no universal agreement on the size that fits this bill — the U.S. NOAA (National Oceanic and Atmospheric Administration) and the European Chemicals Agency define microplastics as particles less than 5 mm in length. For the purposes of this study, however, since the authors were interested in measuring the quantities of plastic that can cross membranes and diffuse into the body via the bloodstream, the smallest particles they could measure were about 0.0007 millimetre (700 nanometres) in size.

What were the plastics that the study looked for in the blood samples?

The study looked for the most commonly used plastic polymers. These were polyethylene terephthalate (PET), polyethylene (used in making plastic carry bags), polymers of styrene (used in food packaging), poly(methyl methacrylate) and polypropylene. The researchers found the first four types present.

How was the study conducted?

In the study, blood from 22 adult healthy volunteers was collected anonymously, stored in vessels protected from contamination, and then analysed for its plastic content. The size of the bore in the needle served to filter out microplastics of a size greater than desired. This was compared against suitable blanks to rule out pre-existing plastic presence in the background.

What are the key results of this study?

The study found that 77% of the tested people (17 of the 22 persons) carried various amounts of microplastics above the limit of quantification. In 50% of the samples, the researchers detected PET particles; in 36%, polystyrene; in 23%, polyethylene; and in 5%, poly(methyl methacrylate). Traces of polypropylene were not detected.

They found, on average, 1.6 micrograms of plastic particles per millilitre of blood per donor. They write in the paper that this can be interpreted as an estimate of what to expect in future studies, and that it is a helpful starting point for the further development of analytical studies for human matrices research.
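As a quick arithmetic check of the headline figure quoted above (a small illustrative snippet, using only numbers from the article):

```python
# The study reports that 17 of 22 donors had quantifiable microplastics.
donors_total = 22
donors_with_plastic = 17

share = donors_with_plastic / donors_total
print(f"{donors_with_plastic} of {donors_total} donors = {share:.0%}")  # ~77%, as stated
```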

What is the significance of the study?

Making a human health risk assessment in relation to plastic particles is not easy, perhaps not even possible, due to the lack of data on people’s exposure to plastics. In this sense, it is important to have studies like this one. The authors of the paper also remark that validated methods to detect trace amounts of extremely small (less than 10 micrometre) plastic particles are lacking; hence this study, which develops a method to do so, is important. Owing to the small number of participants, the study results cannot by themselves be used to shape policy, but the strength of the paper lies in its method and in demonstrating that microplastics can indeed be found in blood.

Does the presence of microplastics in blood have health impacts?

It is not yet clear whether these microplastics can cross over from the bloodstream to deposit in organs and cause diseases. The authors point out that the human placenta has been shown to be permeable to tiny particles of polystyrene (50, 80 and 240 nanometre beads). Experiments on rats whose lungs were exposed to polystyrene spheres (20 nanometres) led to translocation of the nanoparticles to placental and foetal tissue. Oral administration of microplastics in rats led to accumulation in the liver, kidney and gut.

Further studies have to be carried out to really assess the impact of plastics on humans.

THE GIST
Microplastics are tiny bits of plastic found in various places in the environment — and now, as per recent studies, in human blood as well.
In the study, blood from 22 healthy volunteers was collected and analysed for its plastic content. It found that 77% of tested people (17 of the 22 persons) carried various amounts of microplastics above the limit of quantification.
It is not yet clear if these microplastics can cross over from the blood stream to deposit in organs and cause diseases.



The 45-day exhibition will be showcased from April 1 to May 15

Science Gallery Bengaluru’s new online exhibition, PSYCHE, seeks to explore the complexities of the human mind in socio-political and cultural contexts. The 45-day exhibition, in collaboration with National Institute of Mental Health and Neuro Sciences (NIMHANS), The Wellbeing Project and Museum Dr. Guislain, Ghent, will be showcased from April 1 to May 15.

Curated by the Science Gallery Bengaluru team, PSYCHE brings together philosophers, neuroscientists, artists, psychologists, filmmakers, sociologists, writers and performers. The exhibition will feature 10 exhibits, six films and over 40 live programmes including workshops, masterclasses and public lectures.

The exhibits trace the complexities of the mind. They are not all about research, however; they speak about the society as well. For instance, the audio-visual installation, ‘Black Men’s Minds’, rests upon the voices of black men who are often missing in conversations on mental health, trauma and stigma.

The exhibits will also feature interactive experiences such as ‘Playing with Reality’, based on the winner of the Best VR Immersive Work in 2019 at the Venice International Film Festival, which unravels what the phenomenon of psychosis can teach about the limits of reality; ‘The Serpent of A Thousand Coils’, a game that gives participants an empathetic insight into the minds of people with Obsessive Compulsive Disorder (OCD); and ‘Change My Mind’, a participatory web experience that helps one understand the implications of brain implants for the mind.

‘Hamlets Live’ is a six-part performance that explores Hamlet’s inner monologues in a world that is strongly dictated by the real and hyperreal aspects of social media.

“In PSYCHE, we explore the human mind in a most unusual journey where we try to understand the mind with the help of our mind,” says Jahnavi Phalkey, the founding director of Science Gallery Bengaluru. “We pay close attention to both, the maladies as well as the health of our sentient selves. As always, we unpack objects of research inquiry across research disciplines at Science Gallery Bengaluru, to further our mandate of bridging the gap between research and the public.”

To register for the exhibition and get more information, visit psyche.scigalleryblr.org 




‘India’s space economy has evolved considerably and now accounts for about 0.23% of the GDP’

A collaboration between two premier research and educational institutions in Thiruvananthapuram has shed interesting light on India’s “space economy”, the exact contours of which have remained largely vague even as the country’s space programme grew by leaps and bounds.

In a first-of-its-kind attempt at measuring the size of India's space economy, researchers from the Centre for Development Studies (CDS) and the Indian Institute of Space Science and Technology (IIST) arrived at a figure of ₹36,794 crore (approximately $5 billion) for the 2020-21 fiscal. The estimated size of India's space economy, as a percentage of the GDP, has slipped from 0.26% in 2011-12 to 0.19% in 2020-21, they found.

The findings, outlined in a paper 'The Space Economy of India: Its Size and Structure' by CDS director Sunil Mani; V. K. Dadhwal, till recently Director of IIST; and Shaijumon C. S., Associate Professor of Economics, IIST, were the subject of a webinar on Saturday.

By employing internationally accepted frameworks, the authors examined the annual budget for the space programme and its constituents: space manufacturing, operations and applications. According to the paper, space applications accounted for the major chunk of this evolving economy, constituting 73.57% (₹27,061 crore) of it in 2020-21, followed by space operations (₹8,218.82 crore, or 22.31%) and manufacturing (₹1,515.59 crore, or 4.12%).
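As a quick consistency check of those shares (an illustrative snippet using the figures quoted above; small differences come from rounding in the reported numbers):

```python
# Component estimates for 2020-21, in ₹ crore, as quoted in the article.
components = {
    "space applications":  27061.00,
    "space operations":     8218.82,
    "space manufacturing":  1515.59,
}

total = sum(components.values())
print(f"total: ₹{total:,.2f} crore")             # ~₹36,795 crore vs the reported ₹36,794 crore
for name, value in components.items():
    print(f"{name}: {100 * value / total:.2f}%")  # ~73.5%, 22.3%, 4.1%, close to the reported shares
```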

The budget outlay for space has considerable influence on the dynamics of the space economy, according to the study. “India's space economy has evolved considerably and now accounts, on average, for about 0.23% of the GDP (over 2011-12 to 2020-21). We have also noticed a decline in the budget for space-related activities, leading to a reduction in the size of the economy in the last two years,” Prof. Mani said. The budget outlay in 2020-21 was ₹9,500 crore, shrinking from ₹13,033.2 crore in the previous fiscal. The estimated size of the space economy shrank from ₹43,397 crore in 2018-19 to ₹39,802 crore in 2019-20 and ₹36,794 crore in 2020-21.

The study also found that the space budget as a percentage of the GDP slipped from 0.09% in 2000-01 to 0.05% in 2011-12, and has remained more or less at that level since then. In relation to GDP, India's spending is more than that of China, Germany, Italy and Japan, but less than that of the U.S. and Russia.

While it has limitations, the study nevertheless is a first-time attempt at scientifically measuring the size of the space economy, Dr. Shaijumon said. Prof. Mani cited the inability to establish the size of the space-based remote sensing industry as a drawback. “The next step for us would be to look at the impact of the space economy on the Indian economy itself. The impact is both direct and indirect,” Prof. Mani said.

For the present study, the authors have relied on Indian Space Research Organisation (ISRO) and Parliament documents, the Comptroller and Auditor General’s (CAG) reports, data on intellectual property rights and other government data, in addition to Scopus-indexed space publications.

The CDS-IIST research project has coincided with the new Central government policies opening up the sector to private players. These policies, according to the authors, are very likely to enlarge the size of the sector through enhanced private investment and improved integration with the global private space industry.




India has about 12 crore smokers. This number needs to be cut in the interest of public health

As per estimates of the WHO (and the U.S. FDA), 1.3 billion of the 7.9 billion people across the world smoke, and 80% of them live in low- and middle-income countries. Smoking is thus an epidemic and a great public health threat, killing over eight million people every year. Over seven million of these deaths are due to direct tobacco use, and 1.2 million are of non-smokers exposed to second-hand smoke. And, as per the American Journal of Preventive Medicine, traditional cigarette smokers are 30% to 40% more likely than non-smokers to develop Type 2 diabetes.

A recent article by Dr. Smiljanic Stasha points out that (1) smoking causes over seven million deaths every year; (2) 5.6 million young Americans might die because of smoking; (3) second-hand smoke causes 1.2 million deaths worldwide; (4) smoking is one of the world's leading causes of impoverishment; and (5) in 2015, 7 out of 10 smokers (68%) reported that they wanted to quit completely. A recent issue of Nature Medicine points out that after the WHO adopted the Framework Convention on Tobacco Control in 2003, tobacco control was included as a Global Development Target in the 2030 Agenda for Sustainable Development. If all 155 signatory countries adopt smoking bans, health warnings and advertising bans, and raise cigarette costs, this sustainable development goal is indeed achievable.

The Indian Scenario

India has graduated from a low-income country into a middle-income country, and is estimated to have 120 million smokers (out of a population of 138 crore), or about 9% of its people. A material called cannabis was long prevalent in India and neighbouring countries. Cannabis is a plant product that was (and still is) known by the local names marijuana, charas, hashish, ganja and bhang; the user feels ‘high’ upon consuming (smoking) it. The active principle in cannabis is a molecule called tetrahydrocannabinol, which is responsible for its psychoactive and intoxicating effects. Even today, during the annual Holi festival, people in India smoke ganja or consume bhang to feel “high”.

Turning to tobacco

Turning to tobacco: its origin and its use as a medicine, in ceremony and as an intoxicant are extensively described by the Indian Council of Agricultural Research's (ICAR) Central Tobacco Research Institute at Rajahmundry, Andhra Pradesh. The tobacco plant appears to have been first cultivated in the Peruvian/Ecuadorean Andes in South America. The Spanish word for these intoxicating plants was ‘tobacco’.

It appears that the explorer Christopher Columbus, during his voyage to the Americas, found that the natives would powder dried tobacco leaves and sniff the powder through their noses, and enjoyed it. To do so, they would use a hollow forked cane. Columbus did so too, enjoyed it and carried the practice over to Europe.

The ‘pipe’ used by Europeans for enjoying tobacco appears to have its origin in the forked cane of the Native Americans. It was Columbus who introduced tobacco to Europe, and Europeans carried it to their colonies in India and South Asia. The Portuguese introduced tobacco cultivation in the north-western districts of Gujarat, and the British colonials did the same in U.P., Bihar and Bengal.

The Imperial Agricultural Research Institute was established in 1903 and began research on the botanical and genetic study of tobacco. In effect, then, the tobacco plant and its intoxicating effects were not known in India until the plant was brought in and cultivated by Westerners.

The active principle in tobacco is the molecule nicotine. It is named after Jean Nicot, a French ambassador in Portugal, who sent tobacco seeds from Brazil to Paris in 1560. Nicotine was isolated from tobacco plants in 1828 by W. H. Posselt and K. L. Reimann of Germany, who considered it a poison. It is highly addictive unless used in a slow-release form. (Is this why filter cigarettes are used, to slow the release?)

Louis Melsens described its chemical empirical formula in 1843, and its molecular structure was described by Adolf Pinner and Richard Wolffenstein in 1893. And it was first synthesized by Auguste Pictet and Crepieux in 1904. Recent research has also shown that regular smokers are prone to Type 2 diabetes.

Ban on tobacco products

India has about 12 crore smokers. This number needs to be drastically reduced in the interest of public health. The Indian Ministry of Health is set to prohibit the sale of cigarettes. India became a party to the WHO Framework Convention on Tobacco Control on February 27, 2005.

In accordance with this Framework and SD Goals, our Health Ministry has completely banned smoking in many public places and workplaces such as in healthcare, educational and government facilities, and in public transport. These are welcome moves and we the public must cooperate with them.

(dbala@lvpei.org)




A whaling ship from Massachusetts that sank near the mouth of the Mississippi River in 1836 was reportedly found in February

A whaling ship from Massachusetts sank near the mouth of the Mississippi River about 15 years before Herman Melville introduced the world to Moby Dick.

Nearly 190 years later, experts said, it was still the only whaler known to have gone down in the Gulf of Mexico, where the threat of enslavement at Southern ports posed a risk for Black and mixed-race men who often were part of whaling crews.

Researchers, checking out odd shapes during undersea scanning work on the sandy ocean floor, believed they had finally found the shipwreck about 113 kilometres off Pascagoula in Mississippi.

It was documented in February by remotely operated robots in about 1,829 metres of water.

Not much is left of the two-masted wooden brig thought to be Industry, a 65-foot-long (20-metre-long) whaler that foundered after a storm in 1836. An old news clipping found in a library showed its 15 or so crew members were rescued by another whaling ship and were returned home to Westport in Massachusetts, said researcher Jim Delgado of SEARCH Inc.

The discovery of Industry shows how far whaling extended into a region whose whaling history was relatively little known, despite the Gulf's extensive maritime history.

“The Gulf is an undersea museum of some incredibly well-preserved wrecks,” said Delgado of SEARCH Inc., who a few years ago helped identify the remains of the last known US slave ship, the Clotilda, in muddy river waters just north of Mobile in Alabama.

The find also sheds light on the way race and slavery became entangled in the nation's maritime economy, said historian Lee Blake, a descendant of Paul Cuffe, a prominent Black whaling captain who made at least two trips aboard the Industry.

Southern slave owners felt threatened by mixed-race ship crews coming into port, she said, so they tried to prevent enslaved people from seeing Whites, Blacks, Native Americans and others, all free and working together for equal pay.

“There were a whole series of regulations and laws so that if a crew came into a Southern port and there were a large number of mixed-raced or African American crew members on board, the ship was impounded and the crew members were taken into custody until it left,” said Blake, president of the New Bedford Historical Society in Massachusetts. Black crew members also could be abducted and enslaved, she said.

Images of Industry captured by NOAA Ocean Exploration aboard the research ship Okeanos Explorer show the outline of a ship along with anchors and metal and brick remnants of a stove-like contraption used to render oil from whale blubber at sea, elements Delgado described as key evidence that the wreck was a whaling vessel.

The Industry photos pale in comparison to those recently released of Endurance, which sank in 3,048 metres of frigid Antarctic water a century ago and is incredibly well preserved. Bottles believed to date to the early 1800s are visible around Industry, but no ship's nameplate; what appears to be modern fishing line lies near the metal tryworks used to produce oil from whale fat.

The Gulf was a rich hunting ground for sperm whales, which were especially valuable for the amount and quality of their oil, before the nation's whaling industry collapsed in the late 19th century, said Judith Lund, a whaling historian and former curator at the New Bedford Whaling Museum in Massachusetts.

“In the 1790s, there were more whales than they could pluck out of the Gulf of Mexico,” she said in an interview.

While at least 214 whaling voyages ventured into the Gulf, Lund said, ships from the Northeast rarely made extended port calls in Southern cities like New Orleans or Mobile, Alabama, because of the threat to crew members who weren't white.

That may have been a reason the whaling ship that rescued Industry's crew took the men back to Massachusetts, where slavery was outlawed in the 1780s, rather than landing in the South.

"The people who whaled in the Gulf of Mexico knew it was risky to go into those ports down there because they had mixed crews," said Lund.




Billions of years ago, a prokaryotic organism, an archaeon, captured a bacterial endosymbiont

An organism that has been around for about two billion years has given biologists from the Centre for Cellular and Molecular Biology (CCMB), Hyderabad, a clue as to how mitochondria became an inseparable part of animal and plant cells. The researchers, led by Rajan Sankaranarayanan, identify two key transformations: one in a molecule known as DTD for short and another in a transfer RNA (tRNA).

“Our lab works on a molecule called D-aminoacyl-tRNA deacylase (DTD). We observed some unexpected biochemistry of eukaryotic DTD that could be explained based on the endosymbiotic origin of complex eukaryotic cell organelles,” the researchers say. Endosymbiosis is an intense form of symbiosis in which one of the organisms is captured and internalised by the other.

Today, mitochondria are well known to be integral parts of the eukaryotic cell. They are dubbed the powerhouses of the cell because they help generate energy in the form of ATP, powering the cell. But they were not always part of animal and plant cells. Once, about two billion years ago, a prokaryotic organism (one without a nucleus), an archaeon, captured a bacterial cell. The bacterial cell learnt to live within the archaeon as an endosymbiont. How this happened has been an important question among biologists. “In the late 19th century, microscopists observed that organelles like chloroplast [and later mitochondria] undergo division inside eukaryotic cells that resembles bacterial division, which led them to suspect that these organelles might have arisen from bacterial endosymbionts,” say Jotin Gogoi and Akshay Bhatnagar from CCMB, the first authors of the paper published in Science Advances.

Ancient organism

By studying an organism known as jakobid, which has been around since before animals and fungi branched off from plants and algae in the process of evolution, the researchers have identified two adjustments that had to take place to facilitate the integration of the two organisms. These adjustments were made in the process of optimisation when the two organisms merged together, evidently for compatibility. The researchers show that these changes, in a protein (DTD) and a tRNA (carrying an amino acid glycine for protein synthesis) are crucial for the successful emergence of mitochondria.

Amino acids come with two types of handedness – left-handed and right-handed. Accordingly, their names have a prefix of L or D. All life forms function with only the L-amino acids, in addition to achiral glycine, in proteins. Performing the role of a proofreader, the protein DTD prevents D-amino acids from entering protein synthesis. Before it got incorporated into the eukaryotes, when it was part of the bacterial cell, DTD would not act on glycine, which is essential for protein synthesis. This preference was changed so that it would be compatible with the eukaryotic cell. “Eukaryotic DTD has changed its recognition code preference in order to avoid untoward removal of glycine, which is a crucial ingredient required for the same. We show in our study that this switch in the recognition code is important, without which DTD will be toxic to the eukaryotic cell,” says Rajan Sankaranarayanan of CCMB, who led the work and in whose lab the research was carried out.

Switch in base

The other change identified by the researchers is that mitochondrial tRNA(Gly) has changed its critical nucleotide base from U73 to A73, in order to be compatible with eukaryotic DTD. “This switch in the so-called discriminator base of mitochondrial tRNA(Gly) is important for avoiding removal of glycine and thus stopping protein synthesis in mitochondria – which can be toxic,” says Dr Sankaranarayanan. This means that before the change took place in the nucleotide base, glycine would be removed, which would have been toxic for the cell as protein synthesis would not take place without glycine.

Case of plant cells

Next, the researchers plan to investigate these evolutionary dynamics in plant cells. Plant cells have two DTDs and two organelles equipped with translation apparatus of their own. “The work for first time shows how such molecular optimisation strategies are essential, when derived from different ancestors like archaea and bacteria, for the successful emergence of mitochondria and hence all of eukaryotic life as we see today including humans,” says Dr Sankaranarayanan.




Strong and longstanding T cell responses were seen even when people were not reinfected or vaccinated

Like in most countries where the Omicron variant had become dominant and caused a high spike in daily cases, the third wave in India propelled by Omicron caused a large number of reinfections in unvaccinated people and breakthrough infections even among the fully vaccinated. However, across the world, the Omicron variant was found to cause only mild disease in fully vaccinated people and in those with previous infection. This was real-world proof that previous infection and/or full vaccination with two doses provide protection against progression of disease to a severe form.

Protective effect

Laboratory studies undertaken in various countries have only studied the neutralisation ability of sera from people who have recovered from COVID-19 and people who have been fully vaccinated. This could shed light only on the ability of past infection and/or vaccination to prevent infection by highly transmissible variants with immune escape. But no studies had been done to evaluate the protective effect of memory T cell immune responses against severe disease 12 months after primary infection. A new study from Wuhan addresses this gap. The results were published in the journal The Lancet Microbe.

Independent of severity

The researchers found that neutralising antibodies were detectable even 12 months after infection in “most individuals”, and it remained stable 6-12 months after initial infection in people younger than 60 years. The researchers found that “multifunctional T cell responses were detected for all SARS-CoV-2 viral proteins tested”.

And most importantly, the magnitude of T cell responses showed no difference regardless of how severe the disease was. While the ability of antibodies to neutralise was nearly absent against the Beta variant, it was reduced in the case of the Delta variant.

In contrast, T cell immune responses were detectable in all 141 individuals tested 12 months after infection, even in those who had lost the neutralising antibody response. And the T cell responses were active against the Beta variant in most of the 141 individuals.

Neutralising antibodies

“SARS-CoV-2-specific neutralising antibody and T cell responses were retained 12 months after initial infection. Neutralising antibodies to the D614G, Beta, and Delta were reduced compared with those for the original strain, and were diminished in general. Memory T cell responses to the original strain were not disrupted by new variants,” they write. “Our findings show that robust antibody and T cell immunity against SARS-CoV-2 is present in majority of recovered patients 12 months after moderate-to-critical infection.”

Robustness of T cells

The study reveals the durability and robustness of the T cell responses against variants, including Delta, even one year after infection. Most importantly, the robust and long-standing T cell responses were seen in people who had not been reinfected or vaccinated. This would mean that even in the absence of vaccination, a person who was infected by the virus even a year ago would have robust immune responses, which would offer protection against the disease progressing to a severe form requiring hospitalisation. The neutralising antibodies, however, were found to diminish by the end of 12 months.

It might be recalled that except for the Oxford vaccine (AstraZeneca), none of the trials evaluated the ability of the vaccines to prevent infection. The endpoint of all vaccine efficacy studies was to evaluate whether vaccinated people developed symptomatic disease or not.

Lack of studies

However, the booster doses aggressively pushed by vaccine manufacturers are for preventing infection. And even when the neutralising antibodies increase after a booster shot, they do drop after a few months. No studies have been done to evaluate if booster doses improve T cell immune responses, which is the most important criterion of vaccination.

In the case of neutralising antibodies, the researchers found that 121 (85.8%) individuals were positive for neutralising antibodies at six months, while there was a slight reduction at the end of 12 months, when only 115 (81.6%) were positive.

The neutralising antibody titres did not show any difference based on disease severity — mild or moderate — or in those younger than 60 years. However, the neutralising antibody titres declined in older people and in people with critical disease.

Response to strains

A year after infection, 115 of 141 (82%) individuals had neutralising antibodies against the original strain from Wuhan, China. “In contrast, only 68 (48%) had neutralising antibodies against D614G, 32 (23%) had neutralising antibodies against the Beta variant, and 69 (49%) had neutralising antibody responses against the Delta variant”, they write.
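A quick check of those proportions (an illustrative snippet using only the counts quoted above, out of 141 individuals):

```python
# Neutralising-antibody positivity at 12 months against each strain,
# counts as quoted from the paper.
n = 141
positives = {"original strain": 115, "D614G": 68, "Beta": 32, "Delta": 69}

for strain, count in positives.items():
    print(f"{strain}: {count}/{n} = {count / n:.0%}")   # ~82%, 48%, 23%, 49%, as reported
```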




A new study from Monterey Bay Aquarium Research Institute (MBARI) researchers and their collaborators documents how the thawing of permafrost, submerged underwater at the edge of the Arctic Ocean, is affecting the seafloor. The study was published in the Proceedings of the National Academy of Sciences.

Numerous peer-reviewed studies show that thawing permafrost creates unstable land which negatively impacts important Arctic infrastructure, such as roads, train tracks, buildings, and airports. This infrastructure is expensive to repair, and the impacts and costs are expected to continue increasing.

Using advanced underwater mapping technology, MBARI researchers and their collaborators revealed that dramatic changes are happening to the seafloor as a result of thawing permafrost. In some areas, deep sinkholes have formed, some larger than a city block of six-story buildings. In other areas, ice-filled hills called pingos have risen from the seafloor.

“We know that big changes are happening across the Arctic landscape, but this is the first time we've been able to deploy technology to see that changes are happening offshore too,” said Charlie Paull, a geologist at MBARI and one of the lead authors of the study, in a release. “This groundbreaking research has revealed how the thawing of submarine permafrost can be detected, and then monitored once baselines are established.”

While the degradation of terrestrial Arctic permafrost is attributed in part to increases in mean annual temperature from human-driven climate change, the changes the research team has documented on the seafloor associated with submarine permafrost derive from much older, slower climatic shifts related to our emergence from the last ice age. Similar changes appear to have been happening along the seaward edge of the former permafrost for thousands of years.




Half of the blood samples showed traces of PET plastic, widely used to make drink bottles

Scientists have discovered microplastics in human blood for the first time, warning that the ubiquitous particles could also be making their way into organs.

The tiny pieces of mostly invisible plastic have already been found almost everywhere else on Earth, from the deepest oceans to the highest mountains as well as in the air, soil and food chain.

A Dutch study published in the Environment International journal on Thursday examined blood samples from 22 anonymous, healthy volunteers and found microplastics in nearly 80% of them.

Half of the blood samples showed traces of PET plastic, widely used to make drink bottles, while more than a third had polystyrene, used for disposable food containers and many other products.

"This is the first time we have actually been able to detect and quantify" such microplastics in human blood, said Dick Vethaak, an ecotoxicologist at Vrije Universiteit Amsterdam.

"This is proof that we have plastics in our body — and we shouldn't," he told AFP, calling for further research to investigate how it could be impacting health.

"Where is it going in your body? Can it be eliminated? Excreted? Or is it retained in certain organs, accumulating maybe, or is it even able to pass the blood-brain barrier?"

The study said the microplastics could have entered the body by many routes: via air, water or food, but also in products such as particular toothpastes, lip glosses and tattoo ink.

"It is scientifically plausible that plastic particles may be transported to organs via the bloodstream," the study added.

Mr. Vethaak also said there could be other kinds of microplastics in blood his study did not pick up — for example, it could not detect particles larger than the diameter of the needle used to take the sample.

The study was funded by the Netherlands Organisation for Health Research and Development as well as Common Seas, a UK-based group aimed at reducing plastic pollution.

Alice Horton, anthropogenic contaminants scientist at Britain's National Oceanography Centre, said the study "unequivocally" proved there were microplastics in blood.

"This study contributes to the evidence that plastic particles have not just pervaded throughout the environment, but are pervading our bodies too," she told the Science Media Centre.

Fay Couceiro, reader in biogeochemistry and environmental pollution at the University of Portsmouth, said that despite the small sample size and lack of data on the exposure level of participants, she felt the study was "robust and will stand up to scrutiny".

She also called for further research.

"After all blood links all the organs of our body and if plastic is there, it could be anywhere in us."




On March 23, the Norwegian Academy of Science and Letters announced its decision to award this year’s Abel Prize to Dennis Parnell Sullivan, an American mathematician who is now at the State University of New York, Stony Brook, U.S. The Abel Prize is a top honour in mathematics, similar to the Nobel Prizes in the sciences in that it is awarded for major contributions to the field. Named after the Norwegian mathematician Niels Henrik Abel, the prize was instituted by the Norwegian government in 2002. In this interview, Prof. Sullivan talks to The Hindu about his interest in mathematics, early influences and more.

At what stage in your life did you perceive yourself to be a mathematician?

The second year of college, because I didn't know mathematics existed as a profession until then.

I was in chemical engineering [at Rice University, Texas]. But at that university, all the science students, electrical engineers, and everything took math, physics and chemistry. In the second year, when we did complex variables, one day, the Professor drew a picture of a kidney-shaped swimming pool, and a round swimming pool. And he said, you could deform this kidney-shaped swimming pool into the round one. At each point, the distortion is by scaling. A little triangle at this point goes to a similar triangle at the other point. At every point, that's true. We have a formula for the mapping, because we're taking calculus, and we had a notation for discussing it, which we have been studying. But this was like a geometric picture. This mapping was essentially unique. And this was, the nature of this statement was totally different from any math statement I've ever seen before. It was, like, general, deep, and wow! And true! So then, within a few weeks, I changed my major to math.

I was able to use that theorem in the 1980s. This was serious.

I used this wonderful structure in later research… especially, during a ten-year struggle proving mathematically, by 1990, a numerical universality discovered by physicists in the mid-1970s.

Could you tell me the name of the theorem that you proved?

Well I don't like names, I like the theorems though! (Laughs) No, no, I'm just kidding.

I proved something that physicists discovered; I used the theory behind it to prove something called the universality of the geometry of a certain dynamical process that involved renormalization, as in the physics use of the term in quantum field theory. It was sort of in that genre of ideas, but it was a truly math statement. It could be formulated mathematically, and yet the physicists computed this.

The conceptual step in that proof involved the idea of the Riemann mapping theorem. So, proving that universality, that was the conceptual part of the proof.

In fact, the theorem is true by experiment in a whole continuum of situations; it's only the integer ones where it's been proven, because you have to use this idea from the Riemann mapping theorem. And that idea doesn't work in the other cases, as far as we know.
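For readers unfamiliar with it, here is a standard textbook statement of the Riemann mapping theorem he refers to (my addition, not part of the interview):

```latex
% Riemann mapping theorem (standard formulation; needs amsmath/amssymb)
\textbf{Theorem (Riemann mapping).}
Let $U \subsetneq \mathbb{C}$ be a non-empty, open, simply connected proper subset of the plane.
Then there exists a conformal (angle-preserving, one-to-one and onto) map
\[
  f \colon U \to \mathbb{D} = \{\, z \in \mathbb{C} : |z| < 1 \,\}
\]
onto the open unit disc. Fixing a point $z_0 \in U$ with $f(z_0) = 0$ and $f'(z_0) > 0$
makes $f$ unique.
```

This is the kidney-shaped-pool-to-round-pool statement from the earlier answer: locally the map only scales and rotates, and a suitable normalisation makes it essentially unique.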

You used to organise lectures by various mathematicians, where the format was to discuss the minute details regardless of the time taken. Do you still do that?

That was called the Einstein chair seminar. And it was, well, it was the regular format – you invited speakers, they would come and tell the stuff. But we didn't have a time limit. During an hour-long talk, you can stop the speaker a few times. You can't stop him all the time, you know, so it would be open-ended. Sometimes it would go, you know, more than three hours. In fact, the record is from 2.00 to 8.30. I think, finally, the guy had to have a beer. He was from Germany. (Laughs) He wanted to beat the record, though, and he did beat it.

Eventually, I would ask many of the questions, but then the students would start asking too, because it was okay, and there's no such thing as a stupid question. That was the rule. No such thing as a stupid question. But it should be a precise question. That was the Einstein chair seminar. And it's still going on. But now it's a more traditional format, although not always… And now we can do Zoom.

After Grigori Perelman's work in 3D geometric topology, have there been any major advancements? What is the field like after his results?

Well, I mean, I'm an outsider in that field. I'm very interested in it, but I'm not really an expert. It has been very active since then. It's because they now know how to describe, in principle, all three-dimensional manifolds. If you have any kind of a knot or a link, you can think of cutting out small neighborhoods from space around that knot or link, and then you're left with a three-dimensional manifold; you break it up into geometric pieces – the [William] Thurston picture. So it's like when you have a linear transformation matrix and you know its eigenvalues, you know a lot about it. Right. They have something like that now, for knots. It opened up the possibility of proving many things with the basic Poincare group, or fundamental group, of every three manifold, using group theoretic properties. And this is interesting, because already in dimension 4, 5, 6, and so on, the possible groups you can get are everything. One knows this from the 50s, that it's a logically undecidable question. In dimension three, the groups that appear are not arbitrary. They're very rich, and very structured, and Perelman's proof of the Thurston picture gives you an opening to it. Thurston proved a lot of it [but] didn't prove the whole thing. But this step gives you a way to analyse the groups in three-dimensional spaces. It's been very active; one of the Breakthrough prizes, for Ian Agol, was based on what he did about these groups after Thurston's picture and Perelman's proof. It allowed many breakthroughs, in my opinion, but I'm not really a bonafide expert. Okay? I've been watching it, though. All this time.

I've heard that you have been interested in the Navier-Stokes equation for a long time. Can you tell us about how you got interested in it? (The Navier-Stokes equation is now counted as one of the seven millennium problems listed by the Clay Mathematics Institute. Of the seven, only the Poincare Conjecture has been proved, by Grigori Perelman.)

First of all, it's related to being a chemical engineer. If you're in Texas as a student of chemical engineering, there's the petrochemical industry, the oil industry, and organic chemistry and plastics, all around Houston. If you are good in science, and you work on that and become an engineer, you can get a good job and have nice work at a research center. So it's a good thing to do. In fact, during the summers, I had jobs at various such places. Once I had to study the computer methods that they were using to do what's called secondary recovery. You know, when they find oil, because of the pressure, if they drill a hole, the pressure makes it shoot up, right. But after they drill for 20 years, the pressure goes down. What they do then is go to another part of the field, and they drill and they pump in water to create pressure that will push the oil back to their wells, and for this they have to solve the linearised version of the Navier-Stokes equation. I didn't know that name then, but it's a linearised version of the Navier-Stokes equation. While at the summer job where I was studying the possible computer programs, I had a certain question there. That was around 1960. And that was related to how they would place their wells for secondary recovery.

Moira [Moira Chas] and I were visiting Saudi Arabia 35 years later, and I went to this company, Aramco. This is a big, huge company... and they were studying the same problem from 35 years before.

So in a sense, I was aware that there's this huge industry related to fluid flow through porous media. It was astonishing to me to find out, as I did in the 1990s, that the equations in three dimensions, the beautiful equations, are not solved.

And then later, in 2000, that became one of the millennium problems. There are these famous seven problems. The only one that's been solved is by Perelman [Poincare conjecture], the one that you just referred to.

I got an idea. I had an idea that maybe the idea of calculus and expression in terms of partial differential equations is a little too presumptuous. Namely, you have this physical process, which we know is atomic – it's made out of particles. But you see these smooth flows everywhere. So you say, okay, let's model this, like with Newton's calculus, right? Then you find a differential equation, like in every physics course, you would make a little box and put dx, dy and put force and write something and then take the limit as the box goes to zero to get the equation.

Well, that has worked like a charm for many problems, right? You get a beautiful equation here. I love the equation, because it had a geometric meaning that I understood, but it hasn't worked!

I thought about it. Maybe you've done things out of order. First, you imagine the fluid, and you take this calculus limit, and get a beautiful equation. Then when you want to put it on a computer, what do you do? You go backwards. You say, you can't put an infinite formula on the computer, and you can't put derivatives. Right? Instead of derivatives, you put (f(x + h) - f(x)) / h. So you put that on the computer, and then you crank away?

Well, here's what you started with! You went to this ideal continuum, made this PDE. Then you take the PDE out, and you do a discrete process. You're going in and out. So I thought why don't I just go this way.

And there's one precedent for this: when you study the Laplacian, it is called the heat equation. If you just have a conducting medium and you put some heat down, it spreads out like a Gaussian; the heat spreads out. But that formula can be deduced by putting down discrete little dots, equally spaced, and thinking of a particle of heat spreading out with probability half that it goes this way and probability half that it goes that way. And then you write down that coin-flipping process. And then it turns out you see things in the discrete approximation that allow you to make sense of this equation in a much more general way; it gives you a great advantage. The theory of Markov processes, Kolmogorov's work, all that stuff comes from this probabilistic viewpoint. So I could hope that if you did a discrete process, there might be some nonlinear version of something new that you would see, that would help you.
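As an illustration of the coin-flipping picture he describes (my own sketch, not from the interview, assuming the simplest symmetric ±1 random walk): the exact probabilities of such a walk after many flips line up with the Gaussian heat kernel that solves the continuum heat equation.

```python
import math

# After n fair coin flips, a walker's displacement x = 2k - n follows a
# binomial distribution; for large n it matches the Gaussian that solves
# the heat equation (factor 2 because reachable sites are spaced 2 apart).
n = 100
for k in range(40, 61, 5):                    # k = number of "heads" (rightward steps)
    x = 2 * k - n                             # net displacement after n steps
    walk = math.comb(n, k) / 2 ** n           # exact coin-flipping probability
    heat = 2 / math.sqrt(2 * math.pi * n) * math.exp(-x ** 2 / (2 * n))
    print(f"x = {x:+4d}   walk = {walk:.4f}   heat kernel = {heat:.4f}")
```

At x = 0 both give about 0.080, and they stay close across the range, which is the discrete-to-continuum correspondence the answer alludes to.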

I've been trying for 30 years. And I don't have any theorems to show for it. But I'm understanding more and more yet.

How do you choose a problem to work on?

I usually find I want to think about [a question] from the beginning, I want to understand it. That's all I want - to try to understand it. Sometimes it solves problems, it’s not like I choose a problem. I want to understand an area. Yeah.

I've seen instances where a story kind of comes to equilibrium, and it's like the perfect answer. If you actually look at that answer, and then you read backwards through the whole history of ideas, you'll see [that] way back at the beginning, if they had looked at it, not the way it went, but turned slightly and gone this way, they would have gone directly to the answer.

You can sort of forget a lot of the false starts, then make it very simple another way, say, you can make a very simple picture, assume this, prove this this way. And there's one idea here, this is one picture, and then some easy stuff. That's often the way math stories are, from beginning to end. That's not exactly the history.

Mathematics, when it's done and perfect, is absolutely perfect and simple. If things are not simple, then they're not done.

It turns out that the really great things have the property that the steps get submerged into the definitions, and they get taught to the undergraduates and eventually are taught in high school. Euclid's geometry, it's high school stuff, right? But you know, deep. So everything should be simple and basic.

Do you see math in everything around you?

Oh, yeah. Well, there's a blessing and a curse to that. Because, you know, the beautiful thing about math, which is really one of the most powerful aspects of it, is that the concepts can be perfectly clear. There's only one point, related to Gödel's theorem, where there's some ambiguity: we start from some simple notion, which is called a collection of objects. That notion has properties, and one has to assume that; and then, if you assume that, with those properties, mathematics begins. Relative to that assumption, it's perfectly precise.

Math has this potential clarity of concept. You'd never see mathematicians arguing about a math statement. They can agree very quickly that they're talking about the same thing. Then, if one of them thinks it is true, then, “Oh, do you have a proof?” Either he has a proof or he doesn’t. If he doesn't have a proof? We agree on the statement, but it becomes unknown. He doesn't say “Well, I think it's true.” You know, that has no meaning. Or [if he says] “I don't think that's true.” Then it is, “Well, do you have a counterexample?” If the answer is “No, but I don't think it's true,” then I don't care. And they agree, they don't get mad at each other because those are the rules of the game. Because the concepts are precise. This is remarkable. Among all the other sciences, there's nothing like this.

Now, that precision, though, is sort of a curse, because even when you're talking to people, almost everything that's said is not precise, because there are tacit assumptions.

I'm too literal. My wife makes jokes with me every three minutes, and I take them seriously, for example.

But then the other thing, the positive side of the question is that I like to talk to six-year-olds about math, because they're like little mathematicians. They want to know how big numbers are, how big space is, and I like to see the picture in a proof. If you have a picture that shows the essence of proof, you could show it to the child.

It’s just natural to see patterns. But then there is this thing about precision, of language, which is sort of an inconvenience in some ways, although it's a blessing in mathematics itself.

Yeah, so I usually approach things in this mathematical way, a little more than I should sometimes. And I think many difficult math things often have a little picture that can be shared with somebody. In fact, even Hilbert said, if what you're doing can't be explained to the common man, you don't understand it. He said that when you meet the person on the street, you don't need formulas. That's why I hate names and notation. Because that allows you to pretend you know what you're talking about, when you don't necessarily know what you're talking about. You have a lot of jargon.

What are your thoughts on the coexistence of faith and science?

Well, I think I kind of replaced my spirituality and Catholicism with mathematics. You want to know what you can know? Right? What can you know to be true? Math is pretty good for that; unfortunately, it only deals with very simple questions. You know, psychology, physics deals with the nature of the universe. Mathematics deals with physics. There is something remarkable, and unexplained, in the universe we live in, and also in mathematics itself. I believe mathematics would be the same if there's life on other planets; I think they might have discovered different parts and gone in some different direction. Like, if we were just doing computer science, you would sort of emphasise graph theory and combinatorics and algorithms more. But you know, they might not have done Lie groups yet. Those are all sort of primitive aspects of your question. But if you want to know what's true, then math is a pretty good place to start establishing what it means to know something. In math, we sort of have this certain point: we don't know any absolute fact, in some sense, unless it involves finite systems, but anything that involves something like calculus with an infinite system can only be rigorous and known to be true relative to this basic starting point I mentioned about set theory; you have to assume there are sets of points... Then you can build the numbers: you can build the integers, you can build the reals, and you can build the continuum; then you can build spaces and Lie groups and the rest of mathematics, but it's all relative to this assumption at the beginning. But that's knowing something, you know! If this is consistent, then all of this is consistent, and this is very simple and very believable. So that's the kind of religion, in a way: the mathematicians believe that these systems, this basis, is okay. They're willing to spend their lives working on that. So, that's almost religion, right?

Can you tell me something about your experience in India? You've been there a couple of times at least.

I think my first visit was to Chennai, which I had trouble finding because I knew it as Madras. I remember trying to book a plane ticket to Madras and having trouble getting there. Let's see, if I just think back about it, I remember the cows in the street in Chennai, and the cars and everyone being together. There was no problem.

I also learned that vegetarian food could be delicious. Well, I've had a lot of Indian graduate students, so I kind of know them. I know Indian people.

Do you have a message for the readers?

I could say something that I say to my graduate students: critical thinking is important. It's good to think critically, examine your beliefs, understand why they're commonly held, and then maybe, in certain circumstances, modify them slightly to make them work better. That's what has helped me understand mathematics better. For example, even what you learned from your masters is sometimes just their perspective. Having a perspective is excellent; it is kind of like a bias. It is good because it makes you more effective and you can put your energy in those directions, right? But then sometimes it's not right for some situations, or there's a different way to look at it, and that may help you make progress in a direction that was blocked with the previous perspective. This is not [being] critical in the sense of [being] negative; it's critical in the sense of examining. I'm borrowing this from a wonderful interview of Bertrand Russell in 1952. He says a lot of very charming and very intelligent things, but he also emphasises the point that when you have a perspective, it sometimes leads you to rationalise irrational decisions. You know, so it's good to be critical, even of your own beliefs, because it helps. That works in math too.




One of Abel prize winner Dennis Parnell Sullivan’s key breakthroughs is in developing a new way of understanding rational homotopy theory, a subfield of algebraic topology

The Norwegian Academy of Science and Letters has awarded the Abel Prize for the year 2022 to American mathematician Dennis Parnell Sullivan, who is with the Graduate School and University Center of the City University of New York and the State University of New York at Stony Brook. The citation mentions that the award has been given, “For his groundbreaking contributions to topology in its broadest sense, and in particular its algebraic, geometric and dynamical aspects.”

Topology is a field of mathematics that was born in the nineteenth century and has to do with properties of shapes that do not change when they are continuously deformed. Topologically, a circle and a square are the same; similarly, the surfaces of a doughnut and of a coffee mug with one handle are topologically equivalent, but the surfaces of a sphere and of a coffee mug are not.

Early influences

This mapping was what excited Dennis P. Sullivan when he was in the second year of a chemical engineering course at Rice University, Texas. “The epiphany for me was watching the professor explaining that any surface topologically like a balloon, no matter what shape - a banana or the statue of David by Michelangelo - could be placed onto a perfectly round sphere so that the stretching or squeezing required at each and every point is the same in all directions at each such point,” he said. Further, the correspondence was unique once the location of three points was specified, and these points could be specified arbitrarily. “This was general, deep and absolutely beautiful,” he recalls. He at once changed his major to mathematics, which became his lifelong interest.
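For readers who want the statement behind this anecdote in symbols, here is a hedged sketch, in our own notation rather than the article's, of the classical uniformization fact for genus-zero surfaces that Prof. Sullivan is describing:

```latex
% Any metric g on a surface S homeomorphic to the sphere admits a conformal
% diffeomorphism onto the round sphere: at every point the stretching or squeezing
% is the same in all directions.
\[
  \varphi : (S, g) \to (S^{2}, g_{\text{round}}), \qquad
  \varphi^{*} g_{\text{round}} = e^{2u} g \ \text{ for some smooth } u : S \to \mathbb{R}.
\]
% The map is unique up to composition with a Mobius transformation of the sphere,
% and a Mobius transformation is pinned down by the images of three points, which
% is why fixing three arbitrarily chosen points makes the correspondence unique.
```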

He was so struck by this concept that he used it in later research, especially during a 10-year struggle to prove mathematically, by 1990, a numerical universality discovered by physicists in the mid-1970s.

Changing the landscape, building bridges

“Dennis P. Sullivan has repeatedly changed the landscape of topology by introducing new concepts, proving landmark theorems, answering old conjectures and formulating new problems that have driven the field forwards,” says Hans Munthe-Kaas, chair of the Abel Committee, in a press release given by the Academy.

The release further says that Prof. Sullivan has found deep connections between a variety of areas of mathematics. One of his key breakthroughs is in developing a new way of understanding rational homotopy theory, a subfield of algebraic topology. Later, in the late 1970s, he started working on dynamical systems, a field considered far removed from algebraic topology. Dynamical systems is the study of a point moving in geometrical space.

In 1999, he and his wife and collaborator, Moira Chas, discovered a new invariant for a manifold based on loops, creating the field of string topology.

Dennis P. Sullivan has won numerous awards, among them the Steele Prize, the 2010 Wolf Prize in Mathematics and the 2014 Balzan Prize for Mathematics. He is also a fellow of the American Mathematical Society.




The Fields Medal and the Abel Prize are the two important international prizes for mathematics

The Norwegian Academy of Science and Letters will announce the winner of the 2022 Abel Prize, a top honour in the field of mathematics, today at about 4.30 pm IST.

The Abel Prize is named after the Norwegian mathematician Niels Henrik Abel. The prize was instituted in 2002 to commemorate his 200th birth anniversary.

The Fields Medal and the Abel Prize are the two important international prizes for mathematics. While the Fields Medal honours brilliant work done by a mathematician below the age of forty years, the Abel Prize has no age limit and is more of a lifetime achievement award celebrating important contributions made to a field of mathematics.

The first Abel Prize, awarded in 2003, went to French mathematician Jean-Pierre Serre. The only person of Indian origin to have won this prize is S.R. Srinivasa Varadhan of the Courant Institute, New York University, who won it in 2007. So far, the prize has gone to only one woman mathematician, Karen Keskulla Uhlenbeck of the University of Texas, U.S.A.

The prize consists of a citation and prize money of 7.5 million Norwegian kroner.

Norway’s Ramanujan

A short film on the tragic story of mathematician Niels Henrik Abel is available on the website of the Abel Prize. Indian math aficionados will see the parallel to the life of Ramanujan.

Abel was a young genius who, when he was just 22, showed the unsolvability of the general quintic equation, which had puzzled mathematicians for 250 years. In 1826, he presented an important theorem in Paris; however, the manuscript was misplaced. While trying to reconstruct his lost treatise, he contracted tuberculosis and died three years later, on April 6, 1829, when he was only 26 years old. Just two days after his death, the Paris treatise was discovered. Now, his discovery forms the mathematical basis for the CT scan. His work is also used today in elliptic-curve cryptography (ECC), which helps encrypt data online.




The rate of evolution can match the rate of ecological change

S.M. Rudman et al, Science 375, eabj7484 (2022), DOI: 10.1126/science.abj7484

It is a common belief that evolutionary change takes place at a much slower pace than ecological change. However, this is not completely true, and there is now mounting evidence to the contrary, albeit under certain circumstances. In this line of thought, an experimental study of Drosophila melanogaster, or the fruit fly, has shown that the pace of adaptation can match that of environmental and seasonal changes. The work, published in Science, studies adaptive tracking, which is defined as continuous adaptation in response to rapid environmental change. Adaptive tracking is thought to be a critical mechanism by which living beings continue to thrive in a changing environment, but little is known about its pace, extent and magnitude.

Preferred model

Rudman et al conducted experiments on 10 independent replicate populations of Drosophila melanogaster, each being a group of up to 1,00,000 individuals. The flies were kept in boxes measuring 2 m × 2 m × 2 m, placed outdoors near a dwarf peach tree in Philadelphia.

The fruit fly is the preferred animal model for many experimental studies of evolution because it is relatively easy to breed and maintain, and multiplies rapidly, allowing many generations to be studied in a short time. The researchers measured the evolution of heritable and observable physical characteristics over time. These included stress-tolerance traits, such as survival under cold, hunger and desiccation, as well as fitness and reproductive traits, such as developmental rate and egg-laying.

The flies were exposed to the changing seasons from July to December 2014, and monthly measurements were made. The authors write that they focussed on generating highly accurate measurements, taking these measurements on a time-scale matching that of environmental change, and collecting measurements from 10 independent populations. Taking measurements from parallel lineages of fruit fly and finding similar changes in populations that did not themselves interact ensured that the changes were indeed due to selection and not due to random inherited factors.

There was an interesting observation, pointed out in a Perspective piece about the work in Science written by Ary H. Hoffmann and Thomas Flatt: the rate of phenotypic evolution (of the observable, physical characteristics of an organism) varied according to the trait. While chill coma recovery, a marker of resistance to cold, increased as winter progressed, desiccation resistance increased, plateaued and then decreased. But overall, the rates of evolution of these traits were rapid and matched the requirements of adaptive tracking.

While the paper establishes that fly populations can rapidly adapt to seasonal changes, the authors of the Perspective article remark that this may perhaps be anticipated theoretically, because fruit fly populations are large, with huge genetic variation, which makes the effects of rare mutations on the evolution minimal. This means even weak selection could be sufficient to drive fast evolution. However, they point out that while theories suggest adaptive tracking may be hindered by factors such as the reduction in fitness due to the lag in adaptation after an environmental change, this is not observed in the real Drosophila populations.

Evolutionary rates underestimated

The researchers conclude that this experiment demonstrates how adaptive tracking in response to environmental change can be seen in real time by observing multiple populations evolving in parallel. The action of fluctuating selection implies that evolutionary rates may have been underestimated and that fluctuating selection may play a role in maintaining biodiversity. They write: “Determining whether adaptive tracking is a general feature of natural populations, and elucidating the mechanisms by which it occurs, can be transformative for understanding the generation and maintenance of biodiversity.”




India’s ‘Arctic Policy’ document was unveiled recently

India aspires to have a permanent presence, more research stations and establish satellite ground stations in the Arctic region, suggests a perusal of its ‘Arctic Policy’ document that was officially unveiled last week.

India presently has a single station, Himadri, in Ny-Alesund, Svalbard, a Norwegian archipelago, where research personnel are usually present for 180 days. India is in the process of procuring an ice-breaker research vessel that can navigate the region.

Through its existing network of satellites, India aspires to capture more detailed images to “assist in the development of the Arctic region.”

The Arctic has eight states — Canada, Denmark, Finland, Norway, Iceland, Russia, Sweden and the United States — that comprise the Arctic Council. The region is home to about four million people, a tenth of them from indigenous tribes. India has had a research base in the region since 2008 and also has two observatories.

Arctic weather influences the Indian monsoon and hence has been of interest to Indian researchers for decades. Climate change and the melting of ice caps imply changes to the Arctic weather.

India has so far sent 13 expeditions to the Arctic since 2007 and runs 23 active science projects. About 25 institutes and universities are currently involved in Arctic research in India and close to a hundred peer-reviewed papers have been published on Arctic issues since 2007, the Ministry of Science and Technology said in a statement.

Arctic Council

India has the status of an ‘Observer’ member in the Arctic Council — 12 other countries have such a status — and participates in several meetings that are mostly themed around research.

Beyond science, India also expects business opportunities.

“Explore opportunities for responsible exploration of natural resources and minerals in the Arctic...identify opportunities for investment in Arctic infrastructure such as offshore exploration, mining, ports, railways, information technology and airports. It also expects Indian private industry to invest in the establishment and improvement of such infrastructure,” says the document.




The Bihar-born Harvard physician will lead America’s fight against the virus

In early 2020, when the U.S. was confronting the first wave of COVID-19, with its health system starting to feel the strain, a Harvard physician had said on a morning news show: “We can either have a national quarantine now, two weeks, get a grip on where things are, and then reassess, or we can not, wait another week, and when things look really terrible, be forced into it.”

Almost two years later, the public health expert is saying, “pulling back on mask mandates for now is very reasonable,” as he advises keeping testing rates high for a possible new surge. The man is Indian-origin physician Dr. Ashish Jha, U.S. President Joe Biden’s newly-appointed COVID-19 response coordinator, who will replace Jeffrey D. Zients, a management consultant.

Dr. Jha's appointment comes at a time when America is entering a new phase of the pandemic. After successive waves tested the limits of the country's public health system and claimed over a million lives, COVID-19 numbers are now in decline. With three-quarters of the country's population vaccinated at least once, and with the country closely watching the threat of the new BA.2 variant driving a surge in Europe, Dr. Jha's appointment signifies a strategic shift in the country's pandemic response.

The Dean of Brown University’s School of Public Health, Dr. Jha brings to the table almost two decades of experience in public health and health policy research. Before assuming his current position at Brown in 2020, he was the faculty director of the Harvard Global Health Institute and professor of global health at the Harvard T.H. Chan School of Public Health. As a doctor of internal medicine, he has practised in Massachusetts and Providence.

Born in Pursaulia, Bihar, in 1970, he moved first to Canada in 1979 and then to the U.S. in 1983. An economics graduate, Dr. Jha got his M.D. from Harvard Medical School in 1997 before pursuing internal medicine training at the University of California. In 2004, he earned his master's degree in public health from the Harvard T.H. Chan School of Public Health.

He has to his name “groundbreaking” research on Ebola, and he jointly headed West Africa's strategy to tackle the outbreak of the disease in 2014. Dr. Jha's academic research, spanning over 200 empirical papers, focuses on enhancing the quality of healthcare systems and on how national policies impact healthcare. He has studied extensively how state funds for health can be utilised efficiently.

‘Everyman expert’

Offering straightforward yet measured advice at almost every stage of the pandemic on multiple television shows and through clear and comprehensive Twitter threads, Dr. Jha became “America’s everyman expert on Covid-19,” according to health news site STAT.

For policy makers, his data-centric responses, backed by practical public health experience, became guiding threads for weaving pandemic response strategies. He has also testified in two U.S. Congressional hearings on the COVID-19 vaccine rollout and on the global impact of the pandemic.

Despite sounding warning bells and calling for caution, Dr. Jha has had an optimistic outlook of the pandemic. In May 2021, for instance, while talking about herd immunity, Dr. Jha wrote in a Twitter thread: “We won't be done even if we get to 80% (herd immunity). We’ll need to monitor variants, vaccinate the world, continue testing, etc. But this is all manageable. We'll settle into a new equilibrium as we do with many viruses And COVID won't dominate our lives. And that's what matters.”

Even when the Biden administration took over to steer and fix the previous administration's response, Dr. Jha had said that rejoining the WHO and reviving the U.S.'s leading role in it would not solve things. He emphasised that pandemics are truly global in the 21st century, and that the U.S. or the West cannot just ‘lead’ alone in global health, but must work with international partners, take cross-border health measures and exchange knowledge to “decolonise” global health.

While thanking Mr. Biden for appointing him as the new pandemic response chief, Dr. Jha said: “To the American people, I promise I will be straightforward and clear in sharing what we know, in explaining what we don’t know and how we will learn more, and what the future will ask of all of us.”




In what they call surprise findings, Johns Hopkins Medicine scientists report that unlike fruit flies, mosquitoes' odour sensing nerve cells shut down when those cells are forced to produce odour-related proteins, or receptors, on the surface of the cell. This "expression" process apparently makes the bugs able to ignore common insect repellents.

In contrast, when odour sensors in fruit flies are forced to express odour receptors, it prompts flight from some smelly situations. So the researchers designed their research project suspecting they'd find that mosquitoes have the same reaction as fruit flies when their new odour sensors are forced to be expressed.

The researchers then tested this on female Anopheles mosquitoes. The idea was that if researchers could push mosquito odour neurons into a similar expression state, triggered by odorants already on the skin, the mosquitoes would avoid the scent and fly off. But they found that the mosquitoes had very little response to common animal scents, benzaldehyde and indole, as well as chemical odorants in general, says a press release.

The researchers then tested mosquitoes genetically modified to overexpress an odour receptor that responds to odorants found in common insect repellents, such as lemongrass. They found that these genetically modified mosquitoes were able to ignore the repellents.

The researchers suspect that the odour receptor shutdown may be a kind of failsafe in mosquitoes, ensuring that only one type of odorant receptor is expressed at a single time. Mosquitoes have been found to be trickier than initially thought.




Lockdowns can only transiently protect the susceptible pool of people and postpone the waves

Following a long period of Zero COVID policies, China and Hong Kong are witnessing waves that are spreading faster and more widely than earlier ones. Despite the currently lower death rate, these countries may face a transient shortage of medical resources.

Any wave in any part of the world occurs when the susceptible pool of uninfected people crosses a population-level threshold. This is a defining feature of any infectious disease outbreak. Almost every wave may be characterised as a state of disequilibrium caused by the interaction of the three elements of the epidemiological triad. Understanding the virus (agent), the human immune response (host), and environmental factors is required to comprehend how susceptible pools are formed. The agent (SARS-CoV-2) is distinguished by the emergence of newer variants, whereas at the host level, the protection provided by antibodies dwindles over time. Compliance with mask and crowding restrictions, as well as proactive actions implemented during waves, are examples of environmental influences.
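As a hedged illustration of what crossing such a threshold means (the article itself does not write down a model), the textbook SIR equations make the point; the symbols below are standard notation, not the author's:

```latex
% Standard SIR model: S susceptible, I infectious, N total population,
% beta the transmission rate, gamma the recovery rate, R0 = beta/gamma.
\[
  \frac{dI}{dt} = \beta\,\frac{S}{N}\,I - \gamma I \;>\; 0
  \quad\Longleftrightarrow\quad
  \frac{S}{N} \;>\; \frac{\gamma}{\beta} = \frac{1}{R_{0}}.
\]
% A wave can grow only while the susceptible share of the population exceeds 1/R0.
% Lockdowns, waning antibody protection and more transmissible variants all act by
% shifting S/N or R0 across this threshold.
```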

In China and Hong Kong, imposing severe restrictions such as lockdowns changed only the environmental factors, transiently protecting the susceptible pool and eventually postponing the waves. Air cannot be shut down; the virus has an unrestricted global pass that disregards geographical boundaries. With Omicron and its stealth form, any person carrying an infectious variant, once connected to a susceptible pool, can cause massive outbreaks. South Korea, which was initially lauded for its better control, is reeling under pressure, with hospital bed occupancy reaching 64%, despite 63% of the country's 52 million population receiving booster shots. China, on the other hand, has achieved 38% booster coverage and is at a tipping point: it will witness a significant number of infections when it eventually has to open up.

Several factors influence the natural course of the COVID-19 pandemic. Among them, the action taken in the first and subsequent waves play a significant role. Velásquez et al from the Universidad Nacional de San Agustín de Arequipa, Perú, use the example of three countries that experienced identical illness transmission dynamics during the first month, then differed due to government policies such as quarantine, closed borders, and other restrictions. As a result, rather than local strategies, a viable exit strategy for COVID-19 should take the form of a global control program. A few countries that tried Zero COVID policies locally (New Zealand, Australia, and now China) were unsuccessful and will be compelled to abandon this plan.

Lessons for India

The goalposts for attaining herd immunity have shifted with COVID-19. The fallacy of herd immunity is perpetuated by an incorrect characterisation of a similar strategy against stable viruses, as well as the belief in long-lasting immunity, both of which have been disproven. A ‘wave’ arises when there is a substantial enough number of cases in a specific period.

The lesson from the ongoing waves is that the Omicron variant is unstoppable, both in its original and modified forms, and is destined to spread to nearly every part of the planet. China's Zero COVID approach and South Korea's early success can no longer deter its spread. Is that a good sign? Uncertainty is the only certainty at this point, depending on what happens next in the virus's evolution. It would be a relief if this is the last of the virus's iterations, attuned to cohabit with humans. Alternative trajectories in viral evolution, resulting in either a more contagious or a more virulent newer version, are equally likely, if not more probable.

The problem in identifying future waves or other infectious disease outbreaks is that some areas do not have robust surveillance systems. Without knowing the expected number of cases of an illness, predicting the extra cases, which constitute a wave, is difficult. Areas with missed circulation provide a fertile ecosystem for the development of newer variants. More than ever, creating and strengthening surveillance systems to identify and tackle future disease outbreaks are essential lifesaving investments. Creating such systems must include enhanced spending by States in hiring and sustaining trained manpower, who can manage decentralised epidemiological and genomic surveillance programmes, using standard definitions and processes.

Monitoring the virus

Data from such a system can provide the expected and excess number of cases, to warrant initiating appropriate actions. Monitoring the virus and host-related factors over time necessitates the enhanced use of epidemiological tools, enabled by strengthened and sustained efforts in syndromic and genomic surveillance, conducting regular serosurveys or establishing sentinel-based serosurveillance platforms. Given the uncertainty around how the virus will adapt, what is in our control is to track each constituent of the epidemiological triad, and to act proactively and early enough. We cannot afford to blame the new variant the next time there is one, and it is a question of when, not if.

(Giridhara R. Babu is a Professor and Head, Life course epidemiology at the Indian Institute of Public Health (PHFI), Bengaluru.)




Anuran species in which males look similar to females were found to be more likely to care for the young

Parental caregiving in animals is associated with an element of risk of being attacked by predators. On another track, body colours and patterns are believed to have evolved because they protect the animal against attacks by predators. Tying these two facets together, researchers from the Indian Institute of Science asked whether parental caregiving is associated with body colour, patterns and even dichromatism in frog and toad species. Dichromatism is when the two sexes have different colouration, at least in parts. The results of their study and analysis are published online in the journal Evolution.

Frog parents

Parental caregiving in frogs and toads has many associated questions: Does the species provide care at all, or not? If they do, which of the parents (male or female or both) provides the care? Since frogs and toads, clubbed together as anurans, are amphibians, the question of whether they provide care in the water or land becomes relevant. The study seeks to find if the listed aspects of care are correlated with the way the animals look.

Starting from a list of approximately 1,200 species, the researchers narrowed down the study to 988 species, which they proceeded to analyse. “We found that species that show parental care were more likely to be non-dichromatic species, which means that males and females look similar. This pattern is independent of whether the male or the female cares for the offspring,” says Maria Thaker in whose lab at the Centre for Ecological Sciences, IISc, the study was carried out, in an email to The Hindu.

Minimising risk

Dr. Thaker explains what may be the reason behind this: “When one sex of a dichromatic species is brightly coloured or patterned to attract mates, for example, that sex is more conspicuous to potential predators as well. It’s dangerous to be conspicuous. This is why we predicted that the evolution of caregiving should coincide with the evolution of minimising the risk of unwanted attention, and hence monochromatic species are more likely to show caregiving.”

Dorsal colours were independent of the occurrence of care. “So, whether a species cares for their offspring or not has no bearing on what colour they are. This contradicted our expectation that providing care is dangerous and therefore caregivers should be camouflaged or aposematic,” she adds. Being aposematic refers to being brightly coloured to warn off predators. This gives the indication to predators that the animal may be poisonous to consume.

“The presence of dorsal stripes was significantly correlated with species where males alone cared, but none of the other five pattern categories were significantly correlated (Plain, Bands, Spots, Mottled-Patches),” says K. S. Seshadri, who is a DST-INSPIRE Faculty Fellow in Dr. Thaker’s lab. “In species where females alone care, none of the colours and patterns were correlated with the occurrence of care.”

Dr. Seshadri adds that perhaps the presence of stripes provides the advantage of flicker fusion where a predator is unable to accurately detect the position of the prey. This potential explanation remains to be tested.

Knowledge gaps

The work shows how studying amphibians can help in understanding evolutionary biology, behaviour and ecology more generally. Dr Seshadri says, in this context, “Our work includes 988 species in which we are certain about parental care. There are over 7,000 species of anurans out there and we know very little about their ecology and behaviour. There is clearly a need to bridge knowledge gaps, for amphibians are among the most threatened vertebrates.”




The Alpha and Wuhan strain recombinant was the first; the latest is a mixture of two Omicron sub-lineages

Mutations are a natural phenomenon when viruses replicate. Generally, RNA viruses have a higher rate of mutations compared with DNA viruses. However, unlike other RNA viruses, coronaviruses have fewer mutations. This is because coronaviruses have a genetic “proofreading mechanism” that corrects some of the errors made during replication. This is applicable to SARS-CoV-2 viruses too. As a result, SARS-CoV-2 viruses have “higher fidelity in its transcription and replication process than that of other single-stranded RNA viruses” says a February 2021 paper in Nature.

Providing fitness

The fate of new mutations depends on whether they increase the fitness of the virus, for instance by making it more infectious or, with many people now infected and/or vaccinated, by allowing it to escape immunity. Mutations that provide increased fitness to the virus grow in numbers, and the viruses carrying them become the dominant strain or variant.

Changes to the virus through the natural accumulation of mutations involve only small changes in the genome. But, as in the case of influenza viruses, when a person is simultaneously infected with two different SARS-CoV-2 variants, strains or sub-lineages, chunks of genetic material from one variant can get mixed with the other. This is called recombination.

In the case of the SARS-CoV-2 virus, such recombination was seen soon after the Alpha variant emerged. Alpha was the first variant to emerge, in late 2020 in the U.K. At that time, the dominant strain that had spread to most countries was the Wuhan strain with a mutation called D614G, which increased the transmissibility of the virus. According to a paper published on September 30, 2021 in the journal Cell, recombinant SARS-CoV-2 viruses were found in late 2020-early 2021 in the U.K.

Recombinant sequences

The recombinant virus had a combination of the Alpha variant and the Wuhan strain: it carried the mutations seen in the spike protein of the Alpha variant, while the remaining genome came from the wild-type strain. Since the mutations seen in the Alpha variant made the virus more transmissible, the recombinant virus was found to spread. The researchers were able to find four instances of the recombinant virus spreading, including “one transmission cluster of 45 sequenced cases over the course of two months”. They identified 16 recombinant sequences from a large dataset of 2,79,000 genomes sequenced by the U.K. up to March 7, 2021. Despite the recombinant virus inheriting the spike region with mutations from the Alpha variant, it did not have better fitness than the Alpha variant and hence did not become dominant.

After Alpha, the next variant to emerge was Delta. There was a short window when both Delta and Alpha were present in many countries before Delta wiped out the Alpha variant. Another study found a SARS-CoV-2 recombinant of the Alpha and Delta variants: in mid-August 2021, researchers in Japan found six clinical isolates that were recombinants of the two. In a preprint posted on medRxiv on October 14, 2021, the researchers say the recombinant could have emerged through simultaneous infection of a person by both variants, but they were unable to find any patient with such a mixed infection. Again, this recombinant did not have the added fitness needed to increase in frequency. It just died out.

Sub-lineages

After Delta, the Omicron variant emerged, and it soon split into two sub-lineages, BA.1 and BA.2. With both the Delta and Omicron variants being present simultaneously in many countries, there were “lots of opportunities to co-infect, recombine and transmit onwards”, says virologist Tom Peacock from Imperial College, London in a tweet. Researchers have so far found two kinds of possible recombinants — 1) Delta and BA.1, and 2) BA.1 and BA.2. The recombinant of Delta and BA.1 has been found in the U.K. and France, while the recombinant of the Omicron sub-lineages BA.1 and BA.2 has been found in the U.K.

The recombinant of Delta and BA.1 found in France is called XD, and it contains the “spike protein of BA.1 and the rest of the genome from Delta”. It currently comprises several tens of sequences.

The recombinant of Delta and the Omicron sub-lineage BA.1 found in the U.K. is called XF. According to Dr Peacock, XF has the spike and structural proteins from BA.1 and the remaining part of the genome from Delta. “It comprises several tens of sequences currently,” he says.

The recombinant of the two Omicron sub-lineages BA.1 and BA.2 has been found in the U.K. and is called XE. It was also recently reported in two passengers who had arrived in Israel. It has the spike and structural proteins from BA.2 and the remaining genome from BA.1. XE is the most prevalent of the three, with hundreds of genomes already sequenced.

“XD is maybe a little more concerning. It has been found in Germany, Netherlands and Denmark and it contains the structural proteins from Delta. If any of these recombinants were to act much differently than its parent it might be XD,” Dr Peacock tweeted.

Cases in U.S.

Recombination of Delta and Omicron variants was found in the U.S. as well. The researchers found the “existence of these three unique mutation profiles that present compelling evidence that a recombinant virus was generated during co-infection…This recombinant replicated sufficiently to reach copy numbers that were detected by sequencing,” they write in a preprint posted in medRxiv on March 9, 2022. They identified “20 cases of co-infection with the Delta and Omicron variants, and two cases infected by a virus resulting from the recombination of Delta and Omicron”.

Different route

These two cases that contain only the recombinant virus of Delta and Omicron suggest that the actual recombination had happened in another person and increased in numbers in that person through replication and then effectively transmitted to a new host. “Yet, despite transmitting to a new host at least once, the transmission chain was not sustained; we have not observed any more of these recombinants in our sequencing data,” they write.




Initially, Eugene Parker was mocked for his findings on gigantic eruptions from the sun. Today, NASA’s Parker Probe will pass through the sun’s outer atmosphere.

Eugene Parker, a physicist who theorized the existence of solar wind and became the first person to witness the launch of a spacecraft bearing his name, has died, his son and the University of Chicago said Wednesday.

His son, Eric Parker, said Eugene Parker died peacefully at a retirement community in Chicago on Tuesday, about a decade after being diagnosed with Parkinson's disease. He was 94.

NASA administrators and university colleagues hailed Parker as a visionary in his field of heliophysics, focused on the study of the sun and other stars. He is best known for his 1958 theory of the existence of solar wind — a supersonic flow of particles off the sun's surface.

“Dr. Eugene Parker’s contributions to science and to understanding how our universe works touches so much of what we do here at NASA,” NASA Administrator Bill Nelson said in a statement. “Dr. Parker’s legacy will live on through the many active and future NASA missions that build upon his work.”

Parker recalled in 2018 that his solar wind theory was widely criticized and even mocked at publication. He was vindicated in 1962 when a NASA spacecraft mission to Venus confirmed his theory and solar wind's effect on the solar system, including occasional disruptions of communications systems on Earth.

The experience became part of Parker's identity as an educator and mentor.

“If you do something new or innovative, expect trouble,” he said in 2018 when asked to give advice to early career scientists. "But think critically about it because if you’re wrong, you want to be the first one to know that.”

Parker was born in 1927 in Houghton, Michigan. He studied physics at Michigan State University and California Institute of Technology, then worked as an assistant professor at the University of Utah before coming to the University of Chicago in 1955.

Eric Parker said he and his sister, Joyce, simply knew their dad was a scientist and didn't learn about his stature in the field until later in their lives.

The elder Parker would occasionally rise from the dinner table to jot down an idea, his son said. But his children most remember Parker as an involved dad and an avid hiker, camper and craftsman who carved busts of famous figures from wood and made much of the family's furniture.

“He always felt like workaholics were missing out,” Eric Parker said Wednesday. “He loved his job and he would tell you that when he discovered physics, he would have done it as a side gig because he enjoyed it so much. But he would also go on and on that if you’re getting over 40 hours a week in your job, you were missing out on the rest of life.”

In addition to his children, Eugene Parker is survived by his wife, Niesje, and three grandsons.

Parker Solar Probe

NASA honored Mr. Parker's scientific contributions in 2018 by naming after him a spacecraft destined to travel straight into the sun's corona, its crown-like outer atmosphere. The Parker Solar Probe's successful launch — which the then-91-year-old Parker attended — has since provided unprecedented close views of the sun.

Angela Olinto, dean of the physical sciences division at the University of Chicago, accompanied Parker to the launch. She recalled his seemingly boundless energy in the early morning hours preceding the launch and his childlike grin when everything went smoothly.

“He was this ideal of a physicist: a person who has a strong intuition, who can see one step ahead and who can then sit down and show the intuition is correct,” Olinto said.

Dr. Nicola Fox, director of NASA's Heliophysics Division, said Parker “was a visionary," adding that she will miss sharing the latest data from the probe's travels with him.

“Even though Dr. Parker is no longer with us, his discoveries and legacy will live forever," Fox said.




The Hindu’s weekly Science for All newsletter explains Science, without the jargon.


In the early 20th century, as cars were becoming popular in the United States, lead was first added to petrol to help keep car engines healthy. However, a recent study calculates that exposure to car exhaust from leaded petrol during childhood stole a collective 824 million IQ points from more than 170 million Americans alive today, about half the population of the United States.

Leaded petrol for cars was banned in the U.S. in 1996, but Americans born before then may now be at greater risk for lead-related health problems, as many had worryingly high lead exposures as children. Lead is neurotoxic and can erode brain cells after it enters the body. As such, there is no safe level of exposure at any point in life, health experts say. Young children are especially vulnerable to lead's ability to impair brain development and lower cognitive ability. Unfortunately, no matter what age, our brains are ill-equipped to keep it at bay.

Lead can reach the bloodstream once it is inhaled as dust, ingested, or consumed in water, and it can pass through the blood-brain barrier, which otherwise keeps a lot of toxicants and pathogens, though not all of them, out of the brain. Using publicly available data on U.S. childhood blood-lead levels, leaded-petrol use and population statistics, the researchers determined the likely lifelong burden of lead exposure carried by every American alive in 2015. From this data, they estimated lead's assault on our intelligence by calculating IQ points lost from leaded-petrol exposure as a proxy for its harmful impact on public health.
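The article's own figures already imply an average loss of roughly 824 million divided by 170 million, or about 4.8 IQ points per exposed person. The sketch below shows the kind of cohort bookkeeping being described; it is only an illustration, and the blood-lead levels, cohort sizes and the points-per-microgram slope in it are placeholders, not the study's values:

```python
# Hedged sketch of the cohort arithmetic described above.
# All inputs are illustrative placeholders, NOT the study's data.

def iq_points_lost(mean_blood_lead_ug_dl, population, points_per_ug_dl=0.5):
    """Estimate total IQ points lost for one birth cohort.

    Assumes a simple linear dose-response (points_per_ug_dl), a placeholder for
    the published dose-response estimates the study actually used.
    """
    return mean_blood_lead_ug_dl * points_per_ug_dl * population

# Illustrative cohorts: (label, mean childhood blood lead in ug/dL, cohort size).
cohorts = [
    ("born 1960s", 15.0, 24_000_000),
    ("born 1980s", 8.0, 36_000_000),
    ("born 2000s", 2.0, 40_000_000),
]

total = sum(iq_points_lost(lead, n) for _, lead, n in cohorts)
people = sum(n for _, _, n in cohorts)
print(f"total IQ points lost: {total / 1e6:.0f} million")
print(f"average per person:   {total / people:.1f} points")

# For scale, the published estimate works out to about 824e6 / 170e6,
# i.e. roughly 4.8 IQ points per exposed American on average.
```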


From the Science Page

Children more unlikely to produce antibodies

A home-made analogy that helps study solar spicules in the lab

What causes the interrupted sleep of the elderly?

Question Corner

How do damaged plants warn neighbours about herbivore attacks? Read the answer here

Of Flora and Fauna

India’s solar capacity: Milestones and challenges 

An effort to save the enigmatic owls in India

Mulberry, sugarcane and bush orange: Meet three organic gardeners who are growing it all on their terraces 




Will the U.S.’s stiff curbs on Russia affect their collaboration on the International Space Station?

The story so far: After Russia invaded Ukraine on February 24, the U.S. imposed sanctions on Russia, including a ban on transfer of technology and on Russian banks. Following this, on March 3, the Russian space agency Roscosmos tweeted: “The State Corporation will not co-operate with Germany on joint experiments in the Russian segment of the International Space Station. Roscosmos will conduct them independently. The Russian space programme against the backdrop of sanctions will be adjusted, the priority will be creation of satellites in the interests of defence. Roscosmos will not service the remaining 24 RD-180 engines in the United States, and stop supplying the RD-181.”

According to a Reuters report, this was followed by a statement on Telegram from the head of Roscosmos, Dmitry Rogozin, in which he demanded the lifting of the sanctions, some of which predate Russia's invasion of Ukraine. He said that the sanctions could disrupt the functioning of the Russian spacecraft that service the International Space Station, and thereby affect the Russian segment of the ISS, which helps in correcting the station's orbit. He said that this meant the ISS could fall into the sea or on land. He further said that the Russian segment ensures that the space station's orbit is corrected, roughly 11 times a year, to keep it away from space debris. He pointed out, publishing a map, that the ISS would likely crash down on some country, but most probably not on Russia itself.

What is Russia’s role in maintaining the ISS?

The ISS is built with the co-operation of scientists from five international space agencies — NASA of the U.S., Roscosmos of Russia, JAXA of Japan, the Canadian Space Agency and the European Space Agency. Each agency has a role to play and a share in the upkeep of the ISS. Both in terms of expense and effort, it is not a feat that a single country can support. Russia's part in the collaboration is the module responsible for making course corrections to the orbit of the ISS. The Russians also ferry astronauts to the ISS from the Earth and back. Until SpaceX's Dragon spacecraft came into the picture, the Russian spacecraft were the only way of reaching the ISS and returning.

THE GIST
Russia’s part in the collaboration of the ISS is the module responsible for making course corrections to the orbit of the space station. They also ferry astronauts to and from the ISS.
If Russia backs out of the mission, SpaceX's Dragon module and Boeing's Starliner are the other two options which can dock with the ISS.
Even though the U.S. imposed sanctions on Russia including a ban on transfer of technology, the scheduled missions for a transfer of crew on the ISS between the two countries seem to be unaffected.

Why does the orbit of the ISS need to be corrected?

Due to its enormous weight and the ensuing drag, the ISS tends to sink from its orbit at a height of about 250 miles above the Earth. It has to be pushed up to its original line of motion every now and then. This is rather routine, even for smaller satellites, says Dr. Mylswamy Annadurai, former director of ISRO and presently Vice President of Tamil Nadu State Council for Science and Technology.

Approximately once a month this effort has to be made. It is not necessarily a regular operation, and may be missed once and compensated for later.

The other reason for altering the path of the ISS is to avoid its collision with space debris, which can damage the station.

These manoeuvres need to be done as and when the debris is encountered.

What is the extent of effort and expense involved in this?

Manoeuvring the ISS is expensive. In a year, 7-8 tonnes of fuel may need to be spent, with each manoeuvre costing nearly a tonne of fuel. If a manoeuvre is put off for later, the ISS may sink a little more, and the delayed operation would cost more as a larger correction needs to be made.
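A back-of-the-envelope check using only the figures quoted in this article (a rough sketch, not an official NASA or Roscosmos budget):

```python
# Rough reboost budgeting from the numbers quoted in the article alone.
fuel_per_year_tonnes = 7.5   # article: "7-8 tonnes" a year (midpoint taken here)
fuel_per_burn_tonnes = 1.0   # article: "nearly a tonne" per manoeuvre

burns_per_year = fuel_per_year_tonnes / fuel_per_burn_tonnes
print(f"implied full-size manoeuvres per year: about {burns_per_year:.0f}")
# Broadly consistent with the roughly monthly corrections mentioned above and the
# "11 times a year" figure cited earlier, since not every correction needs a full
# tonne of propellant.
```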

If Russia should back out of the effort, are there spacecraft that can substitute?

There are right now two possibilities: SpaceX's Dragon module and Boeing's Starliner can dock with the ISS. Starliner also has the capacity to carry, say, ten tonnes of fuel.

What is the likelihood of Russia backing out?

Though there have been previous occasions when conflicts have arisen between Russia and the U.S., the operation of the ISS has not been interrupted. Dr. Annadurai points out that two missions are planned. One, on March 18, is meant to take up two Russians and an American astronaut, and the preparatory work is in progress; one astronaut is already there on site. On March 30, it is planned that a mission will return an American astronaut to Earth from the ISS. These seem to be going on as per plan.

“Going by the scientists’ mindset and that such a significant global effort must not go down the drain, my feeling is that scientists from both sides will work together and that this effort will not be in vain,” says Dr. Annadurai.

Is it true that Russia does not have the risk of the ISS crashing down on their country?

The orbit of the ISS mostly does not pass over Russian territory. Places closer to the equator run a greater risk of it falling in their domain. The orbit is inclined at about 50 degrees, and so, most probably, the ISS would fall somewhere within that band of latitudes. But this is only a probability, as the station can be moved or made to disintegrate. In such an eventuality, the people on the ISS would be brought back, and modules could be detached, making the station much smaller and ensuring that it disintegrates before touching the earth.




What is the GenOMICC research project? Will the identification of new genes aid the development of new treatments for the disease?

The story so far: Scientists in the United Kingdom as part of a research project, GenOMICC (Genetics of Mortality in Critical Care), have identified 16 new genetic variants that make a person more susceptible to a severe COVID-19 infection.

What is the GenOMICC study?

GenOMICC — reportedly the largest study of its kind — brings together clinicians and scientists from around the world to find the genetic factors that determine the outcome of critical illnesses. Millions suffer from infectious diseases every year; most cases are mild, but some people become extremely unwell and need critical care. This may be because of their genes, and the GenOMICC project is about identifying them. The scientists involved compare the DNA of critically-ill patients with that of members of the general population. However, ferreting out such differences requires a large number of people and comparing their genetic structures at multiple levels of resolution. Since 2015, GenOMICC has been studying emerging infections such as SARS (severe acute respiratory syndrome), MERS (Middle East respiratory syndrome), flu, sepsis, and other forms of critical illness.

How was the GenOMICC study for COVID-19 done?

Researchers from the GenOMICC consortium, led by the University of Edinburgh in partnership with Genomics England, sequenced the genomes of 7,491 patients from 224 intensive care units in the United Kingdom. Their DNA was compared with that of 48,400 other people who had not suffered from COVID-19, and that of a further 1,630 people who had experienced only mild symptoms. Determining the whole genome sequence for all participants in the study allowed the team to create a precise map and identify genetic variation linked to the severity of COVID-19.
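To give a feel for the comparison being described, here is a hedged sketch of a per-variant case-versus-control test. It is not the consortium's actual pipeline, which works on whole genomes with careful quality and ancestry controls; it only illustrates the idea of comparing allele counts between critically-ill patients and everyone else, and the counts used are made up:

```python
# Hedged sketch of a single-variant case/control comparison (illustrative only).
from scipy.stats import chi2_contingency

def variant_association(alt_cases, ref_cases, alt_controls, ref_controls):
    """Simple 2x2 allele-count test for one genetic variant."""
    table = [[alt_cases, ref_cases], [alt_controls, ref_controls]]
    chi2, p_value, _, _ = chi2_contingency(table)
    odds_ratio = (alt_cases * ref_controls) / (ref_cases * alt_controls)
    return odds_ratio, p_value

# Made-up allele counts: with two alleles per person, 7,491 ICU patients contribute
# about 14,982 alleles and 48,400 controls contribute about 96,800.
odds, p = variant_association(alt_cases=900, ref_cases=14_082,
                              alt_controls=3_800, ref_controls=93_000)
print(f"odds ratio {odds:.2f}, p-value {p:.1e}")
# A genome-wide scan repeats this for millions of variants, so only associations
# passing a very stringent significance threshold are reported.
```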

What are the key findings?

The team found key differences in 16 genes in ICU patients compared to the DNA of the other groups. It also confirmed the involvement of seven other genetic variations, already associated with severe COVID-19, discovered in earlier studies by the same team. The 16 new genetic variants included some that had a role in blood clotting, immune response and the intensity of inflammation. A single gene variant, the team found, disrupted a key messenger molecule in immune system signalling — called interferon alpha-10 — and increased a patient's risk of severe disease. There were also variations in genes that control the levels of a central component of blood clotting — known as Factor 8 — that were linked with critical illness in COVID-19. The interferon finding highlights that gene's key role in the immune system and suggests that treating patients with interferons, which are proteins released by immune cells to defend against viruses, may help manage the disease in its early stages.

How useful are these findings?

The overarching aim of genome association studies is to not only correlate genes but also design treatments. For instance, the knowledge that interferons play a role in mediating a severe infection is already being used in drug therapies in the management of severe COVID. A study called the COVIFERON trial tested three kinds of interferon on the management of severe COVID but found no significant benefit in alleviating disease. Genomics studies reveal an association with certain conditions but don’t necessarily explain how the genes direct the chain of chemical reactions that bring about an adverse outcome. But the knowledge of the gene helps to design targeted drugs. New technologies, such as CRISPR, allow genes to be tweaked or silenced and therefore this approach could be used to make new medicines. The GenOMICC study isn’t the only one of its kind. Several consortia globally are working on identifying genes that may explain different disease outcomes.

THE GIST
The GenOMICC project is a research study that brings together clinicians and scientists from around the world to find the genetic factors that lead to critical illnesses.
To understand the genetic causes of severe COVID-19, the DNA of 7,491 critically-ill patients was compared with that of 48,400 people who had not suffered from COVID-19, and that of a further 1,630 people who had experienced only mild symptoms.
The study found key differences in 16 genes in ICU patients compared to the DNA of other groups. The new variants included some that had a role in blood clotting, immune response and the intensity of inflammation.



Scientists at the Raman Research Institute continue the decade-long quest

In a country of a billion phones, hungry for every bit of radio signal, is a group of scientists looking for spots where one can escape them.

This continuing decade-long quest, led by scientists at the Raman Research Institute (RRI), Bengaluru, has taken them multiple times to Ladakh, to a place aptly named the Timbuktu Collective in Andhra Pradesh, and to lakes in northern Karnataka, with their radio telescope SARAS, which hopes to catch the trace of an extremely elusive sign from space — that of the birth of the first stars, or what's called “the cosmic dawn”. Harvard astronomer Abraham Loeb has remarked that the discovery of such a signal “would be worth two Nobel Prizes” because it would throw light on the structure of the universe in its infancy.

Reverberations of the Big Bang that birthed our universe 13.8 billion years ago continue to linger in a swathe of radiation called the cosmic microwave background (CMB). Current cosmological models of the universe say that at a very specific region in this spectrum the microwave radiation is a little dim, because light from the first stars may have made hydrogen extra opaque at specific radio wavelengths.

Several groups around the world have designed custom-made, highly sensitive radio telescopes and are placing them in regions ranging from remote deserts in Australia to an island in the Antarctic ocean and, if a proposal comes through, lunar orbit.

Ravi Subrahmanyan, former Director of the RRI, has led efforts since 2010 using the Shaped Antenna Measurement of the Background Radio Spectrum (SARAS), but an astounding 2018 result from an American group at Arizona State University propelled several groups, including Dr. Subrahmanyan's, to sharpen their quest.

The EDGES telescope, or the Experiment to Detect the Global Epoch of Reionization Signature, which was placed in an Australian desert, recorded an unusual signal that the group claims is the sign of the cosmic dawn. However, the signal's pattern wasn't shaped in the way cosmological models predicted, and since 2018, when the EDGES result was published, there has been a flurry of interpretation on whether the instrument actually detected the holy grail signal, and if it did, what explained its unusual structure.
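The article does not give numbers, but standard redshift arithmetic pins down where this ‘very specific region’ sits; a hedged aside in our notation, with 78 MHz being the absorption centre the EDGES team reported:

```latex
% The hyperfine (21 cm) line of neutral hydrogen has a rest frequency of about
% 1420 MHz. Radiation emitted or absorbed at redshift z is observed at
\[
  \nu_{\text{obs}} = \frac{1420\ \text{MHz}}{1+z},
\]
% so an absorption feature near 78 MHz corresponds to z of roughly 17, a couple of
% hundred million years after the Big Bang. That is why SARAS, EDGES and similar
% experiments hunt in the tens-of-megahertz radio band, where terrestrial
% interference is fierce and remote sites matter.
```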

To test this, the RRI group made an updated version of SARAS, called SARAS-3. Its chief distinguishing characteristic is that, unlike other radio telescopes, it can be deployed on water bodies. The many layers of soil were themselves a source of radio wave contamination for ground based telescopes. Given that the purpose is to detect a highly elusive signal, water — being of uniform layers — would be an ideal medium, the group reckoned, to make such a sensitive measurement.

In 2020, the radio telescope was deployed in lakes in northern Karnataka, on the Dandiganahalli lake and Sharavati backwaters, to detect the EDGES signal.

Following weeks of observations and months of statistical analysis by Saurabh Singh, research scientist at the RRI, SARAS 3 did not find any evidence of the signal claimed by the EDGES experiment. The group’s paper in the journal Nature Astronomy noted that the “profile... is not of astrophysical origin... their best-fitting profile is rejected with 95.3% confidence... Our non-detection bears out earlier concerns and suggests that the profile found... is not evidence for new astrophysics or non-standard cosmology.”

Dr. Singh told The Hindu that the quest for the signature was still on. Following the measurements on the lake, the group is planning to revisit Ladakh and place the telescope in one of the lakes there in the hope of improving their odds of detecting the signal.

In fact, it's not just any lake but freshwater lakes that are suitable candidates, simply because the salinity of the water in other lakes could interfere with the readings. Ladakh's lakes are by no means a final frontier, Dr. Singh said, as the team is open to prospecting more sites — from northeastern India to the deserts of Rajasthan — in its quest. “There's a lot unknown about how those early stars looked,” Dr. Singh said. “Actually, seeing the signal would reveal more about their composition and how the early universe looked.”




Paint poured on the mouth of the speaker, when fed music, breaks out in spicule-like jets

A team of interdisciplinary researchers from India and the U.K. led by astronomers from the Indian Institute of Astrophysics, Bengaluru, have explained the origin of ‘spicules’ on the Sun, using laboratory experiments as an analogy.

The Sun, our closest star, continues to present us with numerous puzzles. One problem that astronomers are keen to study has to do with solar spicules. These are jets of plasma shooting out from a layer of the Sun called the Chromosphere and making incursions into the atmosphere, or Corona, above it.

Modelling spicules

Many modellers have tried, unsuccessfully, to match the size and abundance of these features, which play important roles in at least two deep problems in solar physics. Now, in the study published in Nature Physics, these researchers have found a way to study spicules in the lab using an analogous system – paint is poured on the mouth of a speaker which is fed music, causing it to break out in spicule-like jets!

Solar spicules rise like forests from the Sun’s Chromosphere and pierce the Sun’s atmosphere or Corona. A typical spicule may be 4,000-12,000 kilometres long and 300-1,100 kilometres wide. These are structures that are believed to transport momentum to the solar wind and to provide heat to the solar Corona, which, intriguingly, can be a million degrees Celsius hotter than the Chromosphere.

The researchers found an analogous system in the most unlikely of places – a blob of paint dancing on the surface of the mouth of an audio speaker. Normally, if you place a liquid in a petri dish on the mouth of a speaker and turn up the sound passing through it, then at some driving frequency and strength the liquid’s free surface becomes unstable and starts vibrating. If the liquid is like paint or shampoo, instead of forming droplets, it will form long jets. This is because the fluid’s long polymeric chains give it a directionality.

Unlikely collaborator

“The spark came from our (then 8-year-old) daughter watching videos of paint dancing on a big (bass) speaker’s cone in slow-motion and commenting that they look like ‘your spicule videos’,” says Dr. Piyali Chatterjee, who is at the Indian Institute of Astrophysics, in an email to The Hindu. She conveyed this to her collaborator and husband, Dr. O.V.S.N. Murthy, who is at the School of Arts and Sciences at Azim Premji University, Bengaluru, and an animated discussion followed in which they estimated that the accelerations involved must be several times the respective values of gravity. Dr. Murthy started experimenting with iodinated polyvinyl alcohol (also used in photographic films) and polyethylene alcohol. Their observations convinced them that they were on the right track with the analogy.
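As a rough illustration of the kind of estimate described above: a surface oscillating sinusoidally with amplitude A at frequency f has a peak acceleration of A(2πf)², which for audio frequencies easily exceeds gravity. The numbers below are purely illustrative assumptions, not the values used in the study:

```python
import math

def peak_acceleration(amplitude_m: float, freq_hz: float) -> float:
    """Peak acceleration (m/s^2) of a surface driven as x(t) = A*sin(2*pi*f*t)."""
    return amplitude_m * (2.0 * math.pi * freq_hz) ** 2

if __name__ == "__main__":
    g_earth = 9.81  # m/s^2
    # Illustrative speaker-cone motion: 0.5 mm amplitude at 60 Hz (assumed values).
    a = peak_acceleration(0.5e-3, 60.0)
    print(f"peak acceleration ~ {a:.0f} m/s^2 (about {a / g_earth:.1f} x Earth gravity)")
```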

“Publish-worthy experiments were performed and filmed with a better camera, long-chain polyethylene oxide system, again, at home, while the disruption [due to lockdowns] continued, and we were able to set up simulations,” says Dr Chatterjee.

Sun simulations

The researchers then did a simulation of the Sun’s surface and showed that a similar mechanism to what they used in the lab experiment can create spicules in the solar plasma. “Numerically driving a harmonic or Faraday-like excitation in plasma was something that we borrowed from the lab experiments,” says Dr. Chatterjee. The simulation showed them the strong similarity between the two systems. The simulation also matched the solar spicules quantitatively.

When asked about the fundamental questions this method can answer, Dr. Chatterjee says that trying to understand the origin and nature of solar spicules is of fundamental importance not just for Coronal heating but also for the mass supply to the solar wind. The spicules are believed to act like channels that transport mass, momentum and energy to the Corona of the Sun.

The team from the U.K. worked on data analysis from observations taken by the IRIS spacecraft and contributed advanced processing techniques, says Dr. Chatterjee, while she and her PhD student Sahel Dey did the simulations. Dr. Murthy designed and performed the experiments.



Read in source website

Animals often use highly specific signals to warn their herd about approaching predators. Surprisingly, a similar behaviour is also observed in plants. Shedding more light on this phenomenon, Tokyo University of Science researchers have discovered one such mechanism. Using Arabidopsis thaliana as a model system, the researchers have shown that herbivore-damaged plants give off volatile chemical ‘scents’ that trigger epigenetic modifications in the defence genes of neighbouring plants. These genes subsequently trigger anti-herbivore defence systems.

Prior studies have shown that when grown near mint plants, soybean and field mustard (Brassica rapa) plants display heightened defence properties against herbivore pests by activating defence genes in their leaves, as a result of "eavesdropping" on mint volatiles. Put simply, if mint leaves get damaged after a herbivore attack, the plants in their immediate vicinity respond to the chemical signals released by the damaged mint plant by activating their own anti-herbivore defence systems. To understand this mechanism better, a team led by researchers at the Tokyo University of Science studied these responses in Arabidopsis thaliana, a model plant used widely in biological studies (Plant Physiology).

First, the researchers exposed the plants to beta-ocimene, a volatile organic compound often released by plants in response to attacks by herbivores like Spodoptera litura. Next, the researchers tried to determine the exact mechanism of action of this volatile-chemical-activated plant defence. They found that the volatile chemicals released by the damaged plants enhanced histone acetylation and the expression of defence gene regulators. The team found that a specific set of enzymes was responsible for the induction and maintenance of the anti-herbivore properties, a press release says.



Read in source website

Study comparing adults and children was done when Wuhan strain was in circulation; results may not hold true for Delta and Omicron

A small study involving 108 participants — 57 children and 51 adults — found that compared with adults, a higher proportion of children did not produce antibodies in response to SARS-CoV-2 infection (that is, did not seroconvert). All the 108 participants were either asymptomatic or had only mild symptoms. The lack of antibodies in children is particularly striking as both adults and children had comparable viral loads.

The study was carried out between May 10 and October 28, 2020 at the Royal Children’s Hospital, Melbourne, Australia. The study looked at the ability of adults and children to produce antibodies when infected with the Wuhan strain of the virus. Whether children would exhibit the same characteristics in the case of the Delta and Omicron variants, where people tend to have far higher viral loads, is not known.

The study recruited children and adults infected with SARS-CoV-2 and their household members, and samples were collected from the throat and nose to detect the virus; blood samples were collected to measure humoral responses.

The results were published in the journal JAMA Network Open.

Using three serological assays, the team of researchers led by Paul V. Licciardi from the Royal Children’s Hospital found that only 20 of 54 children produced antibodies on being infected, compared with 32 of 42 adults. This was despite the viral load being comparable: the mean cycle threshold (Ct) value for adults was 24.1 while it was 28.5 in the case of children. The smaller the Ct value, the higher the viral load. However, the researchers say that when the Ct value was less than 26, both adults — 10 of 11 (90.9%) — and children — 12 of 15 (80%) — developed antibodies.
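For readers unfamiliar with Ct values: each PCR cycle roughly doubles the target RNA, so samples with more starting virus cross the detection threshold in fewer cycles. Under that idealised doubling, a gap of ΔCt cycles corresponds to roughly a 2^ΔCt-fold difference in starting material. A minimal sketch of that rule of thumb (the example numbers are illustrative, not drawn from the study):

```python
def fold_difference(ct_low: float, ct_high: float, efficiency: float = 1.0) -> float:
    """Approximate fold-difference in starting template between two Ct values.

    Assumes the template multiplies by (1 + efficiency) each cycle;
    efficiency = 1.0 means perfect doubling, which real assays only approximate.
    """
    return (1.0 + efficiency) ** (ct_high - ct_low)

if __name__ == "__main__":
    # Example: a 3-cycle gap implies roughly an 8-fold difference under ideal doubling.
    print(f"Ct 25 vs Ct 28 -> ~{fold_difference(25, 28):.0f}-fold difference in viral load")
```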

“The findings of this cohort study suggest that among patients with mild COVID-19, children may be less likely to have seroconversion than adults despite similar viral loads. This finding has implications for future protection after SARS-CoV-2 infection in children and for interpretation of serosurveys that involve children,” they write.

One reason why 34 children did not develop antibodies despite being infected could be that children have been found to mount a “stronger and faster response” to infection than adults. That would mean that children are able to clear the virus so quickly that the immune system is not triggered to produce antibodies against it. So, in the absence of antibodies, it is not clear if these children would be protected against reinfection.

The authors found that a higher proportion of adults who did not develop antibodies were asymptomatic — four of 10 (40%). In contrast, adults who had symptoms had a greater likelihood of developing antibodies. In the case of children, a higher proportion of those who had antibodies did not have any symptoms on infection, compared with children who did not develop antibodies. “This outcome suggests that the host humoral response to SARS-CoV-2 infection in children is different in adults despite similar viral loads and exposure to circulating virus variants,” they write.

One reason why children are able to clear the virus without even producing antibodies may be that, unlike adults, children have a more robust innate and/or mucosal immune response to the virus, which allows them to clear it faster. But the authors were not able to confirm these explanations in the study.

It was previously shown that adults who did not have antibodies were 80% more susceptible to getting reinfected compared with adults who developed antibodies. Even when adults developed antibodies, the risk of reinfection was higher when the level of antibodies was low. But even when antibody levels were low, adults who were reinfected had a lower viral load than adults who did not have antibodies and got reinfected.

“Therefore, a lack of seroconversion [lack of antibodies] may result in a higher susceptibility to reinfection. This hypothesis may have important implications on the transmission of SARS-CoV-2 in the community and the public health response,” they write.



Read in source website

Lateral hypothalamus plays an outsized role in wakefulness, feeding behaviour, learning and sleep

A common human wish is to be able to sleep like a baby. Indeed, in adults, the total hours of sleep and the quality of that sleep decline with age. Older people are especially prone to listless, fragmented sleep. A chronic drop in the quality and quantity of sleep can lead to diminished mental and physical health, and to a reduced lifespan (Mander et al., Neuron, 94, 19 (2017)).

Research has provided many clues to what induces sleep in humans. The pineal gland, at night, releases the hormone melatonin which is involved in regulating the sleep-wake cycle. This has made it a popular supplement for overcoming insomnia, although its effectiveness beyond the short term remains debatable.

However, our ‘awake’ state is much more complex, because nearly the whole brain is involved. This is perhaps why we are perplexed by the often-disrupted sleep of the elderly, where wakefulness repeatedly encroaches upon blissful sleep.

It is known that in older people with sleep disturbances, degeneration of nerve cells is seen in brain centres involved in the coordination of voluntary movements. A recent study has added a new dimension to our knowledge (Li et al., Science, 375, 2022). The study points to the hypothalamus, which lies in the centre of the brain and is the size and shape of an almond. An area in this part of the brain, the lateral hypothalamus, plays an outsized role in wakefulness, feeding behaviour, learning and sleep. Emanating from here are a bunch of nerve cells that fan out and project their nerve endings to all the parts of the central nervous system that are associated with the state of arousal. The chemical message released by these neurons is in the form of small proteins called hypocretins, also known as orexins (the two names come from the two groups of scientists who independently discovered these neuropeptides in 1998).

Like all neurons, these hypocretin/orexin (Hcrt/OX) neurons have endings called synapses, which may be next to the synapse of another neuron, or next to a muscle cell. Electrical signals pass along the length of neurons until they come to the synapse, where they are fleetingly transformed into chemical signals, which cross over and generate a response in the adjacent neuron. In the language of neuroscience, an excitatory signal will lead to the firing of the next neuron – electrical signals are conducted to a synapse at the other end of that neuron. Inhibitory signals tamp down the firing of an active neuron. Hypocretin tends to be excitatory, stimulating the neurons that it reaches.

Hypocretin stimulates wakefulness, and with it, motivated behaviour such as seeking food or a mate, as well as responsiveness to cold, nausea or pain. Of the 86 billion neurons in the human brain, fewer than 20,000 produce hypocretin, but their influence is profound. Most of all, hypocretin is important for maintaining prolonged periods of wakefulness. Directly injecting hypocretin into the cerebrospinal fluid (so that it is quickly delivered inside the brain) will keep you wide awake for several hours. And neurons that produce hypocretin are no longer active when you are asleep.

In experiments, mice deprived of food stay awake and busy for a very long time while they search for food. Mice lacking hypocretin, in which the hypocretin gene has been knocked out, are far less motivated in their hunt.

Fractionation of sleep

What happens to the sleep of the elderly? Li et al. show that with age, changes occur in these hypocretin-producing neurons. They become hyperexcitable, conducting signals and releasing neuropeptides at a very low threshold, at the slightest provocation. The unwanted activation of inactive hypocretin neurons leads to the fractionation of sleep. Changes in aged neurons thus make it more difficult to inhibit their activity.

There is a rare disorder of the nervous system triggered by the loss of Hcrt/OX neurons. Narcolepsy has strange characteristics – an overwhelming desire to sleep in the daytime, even though the total hours of sleep remain unchanged; a tendency to hallucinate as the sleep-wake phases are blurred; frequent loss of muscle tone – cataplexy – during which muscles become flaccid. Only a handful of cases have been documented in India, mostly men in their thirties (Ray, Indian Journal of Medical Research, 148, 748 (2018)). Patients with this condition have vanishingly low amounts of hypocretin in their cerebrospinal fluids.

Finally, can sufferers of fractured sleep dream of ways to bring better constancy to their sleep? In aged mice, the analgesic Flupirtine, although beset with toxicity issues, appears to restore the threshold at which hypocretin neurons get excited, thus restoring the structure of sleep.

(The article was written in collaboration with Sushil Chandani who works in molecular modelling. sushilchandani@gmail.com)

dbala@lvpei.org



Read in source website

This is the first recorded unintentional case of space junk hitting the moon.

The story so far: A leftover piece of a spacecraft flying through space reportedly hit the surface of the moon last Friday, creating a new crater that may be around 65 feet wide. The piece of space junk was earlier believed to be a SpaceX rocket, but was later said to be the third-stage booster of Chang'e 5-T1 – a lunar mission launched by the China National Space Administration in 2014. China, however, denied responsibility, saying that the booster in question had "safely entered the earth's atmosphere and was completely incinerated", news agency AFP reported. According to orbital calculations, the collision took place on March 4 at 5.55 p.m. IST on the far side of the moon. The object reportedly weighs around four tonnes and was racing towards the moon at a speed of 9,300 km an hour. The speed, trajectory, and time of impact were calculated using earth-based telescope observations.

How was the object spotted in space?

American astronomer Bill Gray was the first to predict the collision. In January 2022, Gray had said that a booster from a SpaceX Falcon 9 rocket was likely to hit the moon after seven years of floating in space. Gray later corrected his prediction, saying that the space junk was part of a Chinese lunar mission and not from SpaceX.

Gray runs Project Pluto, a blog that tracks near-earth objects. Project Pluto also supplies astronomical software to amateur and professional astronomers. Gray is the creator of popular astronomy software called Guide.

The astronomer explained the process of ascertaining the date and time of the collision in a blog post on Project Pluto. He noted that the object was first spotted during an asteroid survey in 2015 and was believed to be a part of the Deep Space Climate Observatory (DSCOVR) satellite that was launched by SpaceX on a Falcon 9 rocket on February 11, 2015.

Gray continued to track the object, and after analysing data that came in from nine different observatories in January 2022, he was able to improve the accuracy of the object’s trajectory and give a confident prediction of the date and time of the object’s collision with the surface of the moon.

The astronomer has also said that this is the first recorded unintentional case of space junk hitting the moon.

Why did Gray change his prediction about the identity of the object?

According to Gray, an email from Jon Giorgini of NASA’s Jet Propulsion Laboratory in California made him retrace the trajectory of DSCOVR, and it was seen that the SpaceX spacecraft did not go close to the moon. “It would be a little strange if the second stage went right past the moon, while DSCOVR was in another part of the sky. There's always some separation, but this was suspiciously large,” Gray wrote on his blog.

The astronomer then studied the trajectory of the object backwards, and discovered a lunar flyby on October 28, 2014. The Chang'e 5-T1 mission was launched on October 23, 2014, providing evidence that the object was indeed a leftover from the same mission.

A team at the University of Arizona also studied the object and confirmed that it resembles the Chinese rocket and not that of SpaceX.

How will the impact be confirmed?

NASA’s Lunar Reconnaissance Orbiter and ISRO’s Chandrayaan-2 orbiter are two active lunar missions that are capable of observing the crater and picturing it. The location of the impact – on the far side of the moon – has made it difficult for the crater to be pictured and studied immediately.

Is the crater permanent?

Both the earth and the moon have been hit by multiple objects like asteroids throughout their existence, but craters on the moon are of a more permanent nature than those on earth. This is because the earth has processes like erosion, tectonics, and volcanism, which the moon lacks. According to NASA, these three processes keep the surface of the earth crater-free and remove traces of collisions that have happened in the past. Currently, the earth has less than 200 known craters while the moon has thousands.

An absence of atmosphere means there is no wind system and no weather on the moon, and hence no cause for erosion of existing craters. Absence of tectonics prevents the moon’s surface from forming new rocks, or causing a shift in the existing surface patterns, unlike that on earth. Lastly, absence of volcanism makes it impossible for craters to be covered.

THE GIST
A piece of a spacecraft flying through space reportedly hit the surface of the moon last Friday, creating a crater that may be around 65 feet wide.
The piece of space junk, believed to be the third-stage booster of Chang’e 5-T1, a lunar mission launched by China, weighs around four tonnes and was moving at a speed of 9,300 km an hour.
American astronomer Bill Gray, who runs a blog called Project Pluto, predicted the collision by tracking the object.


Read in source website

The star is more than 12.9 billion light-years away and likely existed within the first billion years after the beginning of the universe.

NASA’s Hubble Space Telescope has discovered the farthest star ever seen to date. The star is more than 12.9 billion light-years away and likely existed within the first billion years after the beginning of the universe. The star system is officially called WHL0137-LS, but it has been nicknamed “Earendel”, which means “morning star” in Old English.

While a lot of evidence points towards Earendel being a star, scientists will have to wait for more data before confirming whether it is a single star or a cluster of two or more. This discovery is a massive leap from the previous record-holding star: “Icarus” or officially, MACS J1149+2223 Lensed Star 1. Icarus existed at a time when the universe was about 4 billion years old or about one-third of its current age.

Scientists refer to this time as redshift 1.5. The word “redshift” is used because as the universe expands, light from distant objects shifts to a longer wavelength, appearing more reddish.
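The relationship behind the term is simple: the observed wavelength is the emitted wavelength stretched by a factor of (1 + z). A minimal sketch using the redshift 1.5 quoted above (the choice of emitted wavelength is purely illustrative):

```python
def observed_wavelength(emitted_nm: float, z: float) -> float:
    """Wavelength (nm) at which light emitted at emitted_nm is observed at redshift z."""
    return emitted_nm * (1.0 + z)

if __name__ == "__main__":
    # Illustrative example: visible light emitted at 500 nm, seen from redshift 1.5.
    print(f"{observed_wavelength(500, 1.5):.0f} nm (stretched into the infrared)")
```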

The researchers behind the discovery documented their findings in a research article published in Nature on March 30. “We almost didn’t believe it at first, it was so much farther than the previous most-distant, highest redshift star,” said astronomer Brian Welch of the Johns Hopkins University, lead author of the paper, in a press statement.

“Normally at these distances, entire galaxies look like small smudges, with the light from millions of stars blending together. The galaxy hosting this star has been magnified and distorted by gravitational lensing into a long crescent that we named the Sunrise Arc,” explained Welch. According to him, studying Earendel will be a window into an era of the universe that humans are unfamiliar with.

The researchers estimate that Earendel is at least 50 times the mass of our sun and a million times as bright. But even such a massive and bright star would have been impossible to see if it weren’t for the phenomenon of gravitational lensing.

[Image caption: In this detailed view, Earendel’s position can be seen along a ripple in space-time (the dotted line) that magnifies it and makes it possible for the star to be seen over such a great distance. Image credit: Science: NASA, ESA, Brian Welch (JHU), Dan Coe (STScI); Image processing: NASA, ESA, Alyssa Pagan (STScI)]

Gravitational lensing occurs when a massive object, such as a cluster of galaxies, warps the fabric of space. This creates a sort of massive magnifying glass that distorts and amplifies the light from distant objects behind it. In the case of Earendel, this is caused by a huge galaxy cluster called WHL0137-08.

Scientists expect Earendel to remain highly magnified in the years to come when it can be observed by NASA’s new James Webb Space Telescope. Webb has a high sensitivity to infrared light which will be useful when trying to learn more about the newly-discovered star because its light is redshifted to longer infrared wavelengths.

Scientists will need data from Webb to conclusively confirm that Earendel is indeed a single star and even to measure its brightness and temperature, which will yield more information about its type, composition, etc.



Read in source website

This stellar event could potentially cause auroras in the sky over North America and Europe.

Multiple solar eruptions, all from a single sunspot, have blasted into space recently, according to Space.com. The eruptions originated from an overactive sunspot called AR2975, which has apparently been firing off flares since March 28. This stellar event could potentially cause auroras in the sky over North America and Europe.

“Strong G3-class geomagnetic storms are possible during the early UT hours of March 31st when a Cannibal CME is expected to hit Earth’s magnetic field. During such storms, naked-eye auroras can descend into the USA as far south as, eg Illinois and Oregon,” says SpaceWeather.com, a space news and information site authored by professional astronomer Tony Phillips.

Sunspots are areas on the sun that appear darker than the rest of its surface. They appear when magnetic lines on the sun twist and suddenly realign near the visible surface.

These magnetic fields are so strong that they keep some of the heat within the sun from reaching the surface, making sunspots cooler, and therefore darker, than the rest of the surface.

At times, these sunspots are associated with coronal mass ejections (CMEs), which are massive expulsions of plasma and magnetic fields from the sun’s corona (outer atmosphere). These CMEs travel outwards from the sun, usually between the speeds of 250 km/s (kilometres per second) and 3000 km/s.
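Those speeds translate directly into warning times: dividing the Sun-Earth distance (about 150 million km) by the CME speed gives the approximate travel time, assuming the speed stays roughly constant. A minimal sketch using the generic speed range quoted above, not measurements of any particular CME:

```python
AU_KM = 1.496e8  # mean Sun-Earth distance in kilometres

def travel_time_hours(speed_km_s: float) -> float:
    """Approximate Sun-to-Earth travel time (hours) for a CME at constant speed."""
    return AU_KM / speed_km_s / 3600.0

if __name__ == "__main__":
    for v in (250.0, 1000.0, 3000.0):
        t = travel_time_hours(v)
        print(f"{v:>6.0f} km/s -> ~{t:.0f} hours (~{t / 24:.1f} days)")
```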

But satellites, space stations, astronauts, aviation systems, GPS, and even power grids can be affected by very strong solar eruptions. On March 12, 1989, a severe solar storm caused by multiple CMEs took out the entire electricity grid of Quebec, a Canadian province, for over nine hours.


Read in source website

The 10-day mission, the first-ever private trip to the space station, will set off on April 3 with four astronauts.

Israel’s Brain.Space, a four-year-old startup that studies data on brain activity, is set to put its gear to the test on astronauts in space next week during a SpaceX shuttle flight to the International Space Station (ISS). Three astronauts on the planned private space-flight firm Axiom Space’s mission to the ISS will use a special electroencephalogram (EEG)-enabled helmet made by Brain.Space, the company said on Monday.

The 10-day mission, the first-ever private trip to the space station, will set off on April 3 with four astronauts. “We actually know that the microgravity environment impacts the physiological indicators in the body. So, it will probably impact the brain and we would like to monitor that,” Brain.Space Chief Executive Yair Levy told Reuters.

Data has continuously been collected on heart rate, skin resistance, muscle mass and others in space but not yet on brain activity, he said.

Brain.Space joins 30 experiments that will take part in the so-called Rakia Mission to the ISS.

Three of the four astronauts — including Israeli Eytan Stibbe — will wear the helmet, which has 460 airbrushes that connect to the scalp, and perform a number of tasks for 20 minutes a day, during which data will be uploaded to a laptop on the space station.

The tasks include a “visual oddball” one that the company says has been effective in detecting abnormal brain dynamics.

Similar studies using these tasks have been completed on Earth, and after the mission, Brain.Space will compare the EEG data to see the differences in brain activity between Earth and space.

It noted that such experiments are needed since long-term space exploration and “off-world living are within grasp.”

Brain.Space, which also said it raised $8.5 million in a seed funding round, bills itself as a brain infrastructure company and is working with the cognitive and brain sciences department at Israel’s Ben Gurion University to transform terabytes of data into usable insights. Levy said he hoped the space mission would be a springboard for other institutions, researchers and software developers to use its brain data platform.

“Space is an accelerator. The idea is to revolutionise and make possible brain activity apps, products and services that’s as easy as pulling data from an Apple Watch,” Levy said, pointing to measuring ADHD as an example.



Read in source website

The combination of dual-band smartphone GPS receivers and Android’s support for raw GNSS data recording is what gave researchers the option to use smartphones for this data collection

The Camaliot Android app will allow users to turn their smartphones into a tool for crowdsourced science. All you need in order to participate is a compatible Android phone with a working satellite navigation feature and the Camaliot app, which can be downloaded from the Google Play Store. The app is compatible with more than 50 devices, according to the project’s website.

Camaliot is a project funded by the European Space Agency (ESA) and is led by ETH Zurich in collaboration with a team at ESA. The researchers will combine data from many users’ phones with other sources of data using machine learning for applications like weather forecasting.

Apart from helping scientists create new earth and space weather forecasting models, participants also stand the chance to win phones and Amazon vouchers. This four-month ‘citizen science’ campaign runs until the end of July.

“Global Navigation Satellite Systems (GNSS) such as Europe’s Galileo have revolutionized everyday life,” explained ESA navigation engineer Vicente Navarro in a press release. “And the precisely modulated signals continuously generated by the dozens of GNSS satellites in orbit are also proving a valuable resource for science, increasingly employed to study Earth’s atmosphere, oceans and surface environments. Our GNSS Science Support Centre was created to help support this trend.”

As signals from satellites reach Earth, they are modified by the amount of water vapour in the lower atmosphere. As they pass through this vapour and other irregular patches in the atmosphere, they undergo ‘scintillation’ or fading and delaying. Data about this scintillation can reveal insights about the Ionosphere, where the earth’s atmosphere meets space.

The combination of dual-band smartphone GPS receivers and Android’s support for raw GNSS data recording is what gave researchers the option to use smartphones for this data collection.

Data from smartphones can then be combined with data from the thousands of GNSS stations on the ground on Earth with machine learning models to seek out previously unseen patterns in both Earth and space weather.



Read in source website

These waves move in the opposite direction to the rotation of the sun, three times faster than what should be allowed by hydrodynamics alone. Curiously, they also resemble a similar type of mysterious wave found in Earth’s oceans: Rossby waves.

Researchers from New York University Abu Dhabi and the Tata Institute of Fundamental Research have discovered a set of new vorticity (spinning) waves coming from the Sun that move much faster than can be predicted with existing theories. The high-frequency retrograde (HFR) waves, detected after analysing 25 years of space- and ground-based data, move in the opposite direction to the Sun’s rotation, appear as a pattern of vortices (fluid-like revolving motions) on the surface of the Sun, and travel at three times the speed predicted by current theory.
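To give a feel for the "three times faster" claim, a textbook baseline often used for retrograde waves on a rotating sphere is the classical sectoral Rossby-wave dispersion relation, ω = -2Ω/(m+1) in the rotating frame, where Ω is the rotation rate and m the azimuthal order. The sketch below uses that relation purely as my own illustrative baseline; the published analysis is considerably more detailed, and the chosen m is arbitrary:

```python
# Sketch: classical sectoral (l = m) Rossby-wave frequency for a uniformly
# rotating sphere, compared against a mode three times faster.
SOLAR_ROT_NHZ = 456.0  # approximate sidereal Carrington rotation frequency, in nHz

def classical_rossby_freq_nhz(m: int) -> float:
    """Magnitude (nHz) of the retrograde frequency of a sectoral Rossby mode of order m."""
    return 2.0 * SOLAR_ROT_NHZ / (m + 1)

if __name__ == "__main__":
    m = 10  # illustrative azimuthal order
    classical = classical_rossby_freq_nhz(m)
    print(f"m={m}: classical ~{classical:.0f} nHz; "
          f"a mode three times faster would be ~{3 * classical:.0f} nHz")
```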

The researchers’ observations have been published in the journal Nature Astronomy. The unknown nature of these HFR waves makes them difficult to explain and to place within the current understanding of solar dynamics.

The researchers tested three hypotheses that try to explain the waves: that they are caused by magnetic fields within the sun; that they come from gravity waves in the sun; and that they occur due to the compression of plasma. But none of the three hypotheses held up well against the data on the HFR waves.

But curiously enough, the behaviour of these waves is very similar to a type of wave found in Earth’s oceans known as Rossby Waves, which also travel much quicker than researchers can explain.

“The very existence of HFR modes and their origin is a true mystery and may allude to exciting physics at play,” said Shravan Hanasoge, a co-author of the paper, to EurekAlert, a science news service. “It has the potential to shed insight on the otherwise unobservable interior of the Sun.”

With the lack of any convincing explanation for these HFR waves, researchers have concluded in the article that “there are evidently missing, or poorly constrained, ingredients in the standard models of the Sun, and determining the mechanism responsible for HFR modes will deepen our understanding of the interiors of the Sun and stars.”



Read in source website

Ecologists have repeatedly argued that fire suppression and tree planting can actually reduce biodiversity in savanna and grassland ecosystems.

How fires alter carbon stocks (ie the carbon stored in the biomass and soils) in different biomes has been a longstanding question in ecology. A recent study examines the effect of different fire regimes in Kruger National Park, South Africa, which constitutes a tropical savanna ecosystem. Savannas are fire-dependent ecosystems where fire disturbance has played a key historical role in their evolution and maintenance.

It is important to note the difference between ‘forest’ fires – a term that is usually loosely used – and savanna fires. ‘Forest fires,’ as the name suggests, occur largely in temperate and boreal forests and burn trees. On the other hand, savanna fires only burn herbaceous plants that are close to the ground and, therefore, might release less CO₂ into the atmosphere.

While a biome’s response to fire is largely dependent on the vegetation type, savannas by-and-large exhibit a loss in carbon and nitrogen in surface soils, albeit to varying degrees. By contrast, soil C and N in temperate and boreal needle-leaf forests have been observed to be immune to fire. Fire also alters the composition and quantity of biomass. Biomass could refer either to the trees and shrubs (terrestrial aboveground biomass) or root networks under the soil (belowground/root biomass). Aboveground biomass also constitutes grasses and even dead leaves and shoots that fall off the vegetation (litter).

Here, however, Zhou et al. (2022) focus on both above- and below-ground biomass. The study employed a LiDAR sensor (light detection and ranging) on a drone to ascertain aboveground woody biomass (in terms of tree density), and a ground penetrating radar to determine belowground woody coarse lateral root biomass. The measurements of belowground biomass obtained via this non-invasive method were supplemented with field-based measurements of woody taproot biomass.

The Kruger National Park (KNP) offers an excellent template to study the effects of varying fire regimes, their interplay with rainfall, and their ultimate impact on vegetation. For one, the park contains a wide climatic gradient in terms of precipitation and soil types. Secondly, KNP has been subject to a large experiment in fire manipulation that has been running since 1954.

The study examined plots in a high-rainfall region of the KNP with different fire regimes. One set of plots mimicked the natural fire frequency at KNP ie occurring once in three years. Another group of plots exhibited complete fire suppression, with no fire occurring at all. A third group consisted of plots that displayed an increased fire frequency, whereby fires occurred every year.

The study found that fire suppression, compared to triennial fires, increased tree height, tree cover and aboveground woody biomass. Fire suppression or less frequent fires also contributed to an increase in belowground woody biomass, but not in the same proportion as aboveground biomass as usually believed. The reason behind this is that trees in savannas that burn quite frequently invest heavily in belowground root biomass in order to be able to recover after fire. ‘This result contradicts the assumption of constant root-to-shoot ratio applied elsewhere to estimate belowground carbon.’
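To see why a non-constant root-to-shoot ratio matters, consider how belowground carbon is commonly estimated: measure aboveground biomass and multiply by an assumed fixed ratio. A minimal sketch of that shortcut, with entirely hypothetical numbers (not values from Zhou et al. (2022)), shows how the estimate breaks down if frequent fire shifts the ratio:

```python
def belowground_from_ratio(aboveground_biomass: float, root_to_shoot: float) -> float:
    """Estimate belowground biomass as aboveground biomass times an assumed root:shoot ratio."""
    return aboveground_biomass * root_to_shoot

if __name__ == "__main__":
    # Hypothetical plot values in arbitrary biomass units.
    aboveground = 100.0
    assumed_ratio = 0.4          # a 'constant' ratio assumed elsewhere
    actual_ratio_burned = 0.8    # frequently burned trees invest more in roots

    estimated = belowground_from_ratio(aboveground, assumed_ratio)
    actual = belowground_from_ratio(aboveground, actual_ratio_burned)
    print(f"assumed-ratio estimate: {estimated:.0f}, 'true' value: {actual:.0f} "
          f"(underestimated by {100 * (1 - estimated / actual):.0f}%)")
```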

Further, contrary to previous observations, the study observed that soil organic carbon remained largely unaffected by changes in fire frequencies. In fact, a previous study in KNP even reported no significant loss in C and N post fire. Zhou et al. (2022) argue that this is because most studies sample soil only up to a very shallow depth, and the effects of fire suppression are limited to the top horizons of the soil column. Additionally, the paper argues that substantial carbon is stored in the soil even in treeless areas, owing in large part to the input of C4-derived carbon into soils (grasses fix atmospheric CO₂ into four-carbon molecules, hence the term ‘C4’).

But the moot question is: does total fire suppression do anything to meaningfully reduce CO₂ release and enhance sequestration? According to the present study, whatever improvements in carbon sequestration were observed were largely due to an increase in aboveground woody biomass. However, the improvement is not very significant. When juxtaposed with triennial burning, a complete absence of fire for six decades ‘increased whole ecosystem carbon storage only by 35.4%, even though tree cover increased by 78.9%.’

Ecologists have repeatedly argued that fire suppression and tree planting can actually reduce biodiversity in savanna and grassland ecosystems. Tree planting programmes also tend to fail because seed germination rates are quite low in savanna and grassland biomes. Not only that, they may actually be counterproductive: an altogether absence of fire allows woody vegetation to build up for long periods of time, leading to a massive fire when one eventually occurs.

Calling to question studies that preceded them and advocated afforestation in grasslands and savannas or fire suppression, the paper asserts that ‘the benefits of trees-for-carbon and fire-suppression schemes for climate mitigation have been exaggerated.’

The author is a research fellow at the Indian Institute of Science (IISc), Bengaluru, and a freelance science communicator. He tweets at @critvik 



Read in source website

Not only was a new asteroid detected, but it was detected just before it struck Earth, only the fifth time such a discovery has been made.

Written by Robin George Andrews

Movies that imagine an asteroid or comet catastrophically colliding with Earth always feature a key scene: a solitary astronomer spots the errant space chunk hurtling toward us, prompting panic and a growing feeling of existential dread as the researcher tells the wider world.

On March 11, life began to imitate art. That evening, at the Konkoly Observatory’s Piszkéstető Mountain Station near Budapest, Krisztián Sárneczky was looking to the stars. Unsatisfied with discovering 63 near-Earth asteroids throughout his career, he was on a quest to find his 64th — and he succeeded.

At first, the object he spotted appeared normal. “It wasn’t unusually fast,” Sárneczky said. “It wasn’t unusually bright.” Half an hour later, he noticed “its movement was faster. That’s when I realized it was fast approaching us.”

That may sound like the beginning of a melodramatic disaster movie, but the asteroid was just over 6 feet long — an unthreatening pipsqueak. And Sárneczky felt elated.

“I have dreamed of such a discovery many times, but it seemed impossible,” he said.

Not only had he spied a new asteroid, he had detected one just before it struck planet Earth, only the fifth time such a discovery has ever been made. The object, later named 2022 EB5, may have been harmless, but it ended up being a good test of tools NASA has built to defend our planet and its inhabitants from a collision with a more menacing rock from space.

One such system, Scout, is software that uses astronomers’ observations of near-Earth objects and works out approximately where and when their impacts may occur. Within the hour of detecting 2022 EB5, Sárneczky shared his data and it was speedily analyzed by Scout. Even though 2022 EB5 was going to hit Earth just two hours after its discovery, the software managed to calculate that it would enter the atmosphere off the east coast of Greenland. And at 5:23 p.m. Eastern time on March 11, it did just that, exploding in midair.

“It was a wonderful hour and a half in my life,” Sárneczky said.

Although EB5 was meager, it doesn’t take a huge jump in size for an asteroid to become a threat. The 55-foot rock that exploded above the Russian city of Chelyabinsk in 2013, for example, unleashed a blast equivalent to 470 kilotons of TNT, smashing thousands of windows and injuring 1,200 people. That Scout can precisely plot the trajectory of a tinier asteroid offers a form of reassurance. If spotted in sufficient time, a city faced with a future Chelyabinsk-like space rock can at least be warned.
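The threat scales steeply with size because the damage is set by the object's kinetic energy: mass (which grows with the cube of the diameter) times the square of the speed, divided by two. The back-of-the-envelope sketch below uses an assumed density and entry speed purely to illustrate that scaling; none of these figures are taken from the observations described here:

```python
import math

TNT_KT_J = 4.184e12  # joules per kiloton of TNT

def impact_energy_kt(diameter_m: float, speed_km_s: float, density_kg_m3: float = 3000.0) -> float:
    """Kinetic energy (kilotons of TNT) of a spherical rock of given diameter and speed."""
    radius = diameter_m / 2.0
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius ** 3
    return 0.5 * mass * (speed_km_s * 1000.0) ** 2 / TNT_KT_J

if __name__ == "__main__":
    # Assumed stony density of 3000 kg/m^3 and entry speed of 19 km/s (illustrative only).
    for d, label in ((2.0, "~6 ft class"), (17.0, "~55 ft class")):
        print(f"{d:5.1f} m ({label}) -> ~{impact_energy_kt(d, 19.0):,.1f} kt of TNT")
```

With these assumed inputs the 2-metre case comes out at well under a kiloton while the 17-metre case reaches a few hundred kilotons, which is the order-of-magnitude jump the article describes.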

It normally takes a few days of observations to confirm the existence and identity of a new asteroid. But if that object turns out to be a small-but-dangerous space rock that was about to hit Earth, deciding to wait on that extra data first could have disastrous results. “That’s why we developed Scout,” said Davide Farnocchia, a navigation engineer at the Jet Propulsion Laboratory who developed the program, which went live in 2017.

Scout constantly looks at data posted by the Minor Planet Center, a clearinghouse in Cambridge, Massachusetts, that notes the discoveries and positions of small space objects. Then the software “tries to figure out if something is headed for Earth,” Farnocchia said.

That Sárneczky was the first to spot 2022 EB5 came down to both skill and luck: He is an experienced asteroid hunter who was serendipitously in the right part of the world to see the object on its Earthbound journey. And his efficiency permitted Scout to kick into gear. Within the first hour of making his observations, Sárneczky processed his images, double-checked the object’s coordinates and sent everything to the Minor Planet Center.

Using 14 observations taken in 40 minutes by a sole astronomer, Scout correctly predicted the time and place of 2022 EB5’s encounter with Earth’s atmosphere. Nobody was around to see it, but a weather satellite recorded its final moment: an ephemeral flame quickly consumed by the night.

This isn’t Scout’s first successful prediction. In 2018, another diminutive Earthbound asteroid was discovered 8.5 hours before impact. Scout correctly pinpointed its trajectory, which proved instrumental to meteorite hunters who found two dozen remaining fragments at the lion-filled Central Kalahari Game Reserve in Botswana.

That won’t be possible for 2022 EB5.

“Unfortunately, it landed in the sea north of Iceland, so we won’t be able to recover the meteorites,” said Paul Chodas, the director of the Center for Near Earth Object Studies at NASA’s Jet Propulsion Laboratory.

Chodas said we also shouldn’t worry that this asteroid was detected only two hours before its arrival.

“Tiny asteroids impact the Earth fairly frequently, more than once a year for this size,” he said. And their sizes mean their impacts are typically without consequence. “Don’t sweat the small stuff,” Chodas said.

That Scout continues to demonstrate its worth is welcome. But it will be of little comfort if this program, or NASA’s other near-Earth object monitoring systems, identifies a much larger asteroid heading our way, because Earth presently lacks ways to protect itself.

A global effort is underway to change that. Scientists are studying how nuclear weapons could divert or annihilate threatening space rocks. And later this year, the Double Asteroid Redirection Test, a NASA space mission, will slam into an asteroid in an attempt to change its orbit around the sun — a dry run for the day when we need to knock an asteroid out of Earth’s way for real.

But such efforts will mean nothing if we remain unaware of the locations of potentially hazardous asteroids. And in this respect, there are still far too many known unknowns.

Although scientists suspect that most near-Earth asteroids big enough to cause worldwide devastation have been identified, a handful may still be hiding behind the sun.

More concerning are near-Earth asteroids about 460 feet across, which number in the tens of thousands. They can create city-flattening blasts “larger than any nuclear test that’s ever been conducted,” said Megan Bruck Syal, a planetary defense researcher at the Lawrence Livermore National Laboratory. And astronomers estimate that they have currently found about half of them.

Even an asteroid just 160 feet across hitting Earth is “still a really bad day,” Bruck Syal said. One such rock exploded over Siberia in 1908, flattening 800 square miles of forest. “That’s still 1,000 times more energy than the Hiroshima explosion.” And perhaps only 9% of near-Earth objects in this size range have been spotted.

Fortunately, in the coming years, two new telescopes are likely to help with this task: the giant optical Vera C. Rubin Observatory in Chile, and the space-based infrared Near-Earth Object Surveyor observatory. Both are sensitive enough to potentially find as many as 90% of those 460-foot-or-larger city killers. “As good as our capabilities are right now, we do need these next-generation surveys,” Chodas said.

The hope is that time will be on our side. The odds that a city-destroying asteroid will hit Earth are about 1% per century — low, but not comfortably low.
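A 1% per-century chance compounds over time: the probability of at least one such impact over N centuries is 1 - (0.99)^N, assuming the odds are independent from century to century. A minimal sketch of that arithmetic:

```python
def prob_at_least_one(per_century: float, centuries: int) -> float:
    """Probability of at least one impact over a number of centuries,
    assuming the same independent odds each century."""
    return 1.0 - (1.0 - per_century) ** centuries

if __name__ == "__main__":
    for n in (1, 5, 10):
        print(f"{n:>2} centuries: {100 * prob_at_least_one(0.01, n):.1f}%")
```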

“We just don’t know when the next impact will happen,” Chodas said. Will our planetary defense system be fully operational before that dark day arrives?

This article originally appeared in The New York Times.



Read in source website

As an infrared telescope, Spitzer was uniquely suited to detecting the dust and debris created by collisions between celestial bodies.

A group of astronomers made over 100 routine observations of a distant ten-million-year-old star called HD 166191 using NASA’s Spitzer Space Telescope, and combined these with knowledge of the star’s brightness and size to arrive at information that will help scientists test theories about how planets are formed and how they grow. Their findings are published in The Astrophysical Journal.

The Spitzer Space Telescope was an infrared space telescope that was launched by NASA in 2003 and continued operating for sixteen years before it was finally decommissioned in 2019.

Most rocky planets, satellites and other celestial objects in the solar system, including the Moon and the Earth, were formed by massive collisions early in the history of the solar system. Terrestrial bodies accumulate more material and increase in size with these collisions. They can also break apart into many smaller bodies this way.

The astronomers, led by Kate Su of the University of Arizona, began making observations of HD 166191 in 2015. Early in the star’s life, dust left over from its formation clumped together to form small rocky bodies called ‘planetesimals’, which are potentially seeds for future planets.

After the gas that had previously filled the space between these objects dispersed, catastrophic collisions between them became more frequent. The scientists began making these observations using Spitzer between 2015 and 2019, anticipating that they might be able to gather evidence of such collisions.

Even though the planetesimals themselves were too small to be captured by the telescope, their smashups produce large amounts of dust. As an infrared light telescope, Spitzer was uniquely suited to detecting the dust and debris created by these collisions.

Astronomers are able to record these events by detecting when the debris cloud from one of these smashups passes in front of a star and briefly blocks its light. This is called a transit.

In mid-2018, the HD 166191 system became significantly brighter as seen by the Spitzer telescope, which suggests an increase in debris production. During that time, the telescope also detected a transit, or a debris cloud blocking the star.

The astronomers’ work suggests that this cloud is highly elongated, with a minimum area estimated to be at least three times that of the star. However, the amount of infrared brightening detected probably means that only a small portion of the cloud passed in front of the star and that the debris from this event could even cover an area a hundred times larger than that of the star.

To produce a debris cloud that big, the objects in the collision must be the size of dwarf planets — like Ceres in the asteroid belt between Mars and Jupiter, which is about 473 km wide. The initial clash would have generated enough energy and heat to vaporise some of the material and set off a chain reaction of impacts between fragments from the collision and other small bodies in the system. This could be the reason for the significant amount of dust captured by Spitzer.

Over the next few months, the dust cloud began growing in size and became more translucent until 2019, when the part of the cloud that passed in front of the star was no longer visible. But, by then, the system contained twice as much dust as it had before the cloud was spotted. According to the astronomers, this information can help scientists test theories about how terrestrial planets form and grow.



Read in source website

After the rehearsal, NASA will review data from the test before deciding on the launch date for the upcoming launch.

NASA’s Space Launch System (SLS) rocket, with the Orion spacecraft atop, has arrived at the Kennedy Space Center in preparation for a final test before the space agency’s Artemis I Moon mission. The rocket has been rolled to the pad for a final test before launch. This test, called a wet dress rehearsal, will run the launch team through operations to load propellants into the rocket’s tanks, conduct a full launch countdown, demonstrate the ability to recycle the countdown clock, and also drain the tanks to practice the timelines and procedures the team will use for launch.

Before the test, all systems will have to undergo checkouts at the pad. After the rehearsal, NASA will review data from the test before deciding on the launch date for the upcoming launch.

Several days after the test, the integrated rocket and spacecraft will roll back to the Vehicle Assembly building to remove the sensors used during the rehearsal, charge system batteries, stow late-load cargo, and run final checkouts. About a week before the launch, Orion and SLS will roll to the launchpad for a final time.

“Rolling out of the Vehicle Assembly Building is an iconic moment for this rocket and spacecraft, and this is a key milestone for NASA,” said Tom Whitmeyer, an administrator at the agency, in a press statement. “Now at the pad for the first time, we will use the integrated systems to practice the launch countdown and load the rocket with the propellants it needs to send Orion on a lunar journey in preparation for launch.”

With Artemis, NASA aims to establish long-term exploration at the Moon, including human landing systems and a gateway in orbit around the Moon, in preparation for a human mission to Mars. The uncrewed flight test mission will pave the way for many moon missions, including ones that will land the first woman and the first person of colour on the Moon.



Read in source website

Russia's attack on Ukraine makes the EU-Russia joint effort to discover life on Mars impossible to realize at this time, says the European Space Agency.

Written by Clare Roth

The European Space Agency’s ExoMars 2022 mission won’t launch in September as planned after the agency suspended all cooperation with Russia’s space program Roscosmos.

Led collaboratively by Roscosmos and the ESA, the mission aims to study past life on Mars.

ESA’s Director General Josef Aschbacher called the September launch “practically impossible but also politically impossible,” given Russia’s invasion of Ukraine.

Aschbacher was speaking at a media conference to announce the decision made by ESA’s Council earlier this week.

A two-stage mission

ExoMars has two parts. The first part launched an orbiter and a lander in 2016, but the lander crashed. The September 2022 launch would have been a second installment to deliver a Mars rover to the planet.

This second part of the mission was originally planned for July 2020. But it was postponed until this September due to technical issues.

ESA had hinted at the decision to suspend collaboration with Russia in a February 28 press statement. That statement had said that the sanctions brought against Russia and the wider context of the Ukraine conflict made a 2022 launch “very unlikely.”

And now it’s been canceled fully — for this year.

But while ExoMars is on hold, International Space Station (ISS) operations were moving ahead as normal, Aschbacher said.

Three Russian cosmonauts join the crew this weekend, having launched on a Russian Soyuz rocket from Kazakhstan on Friday. And on March 30, a Russian capsule is scheduled to return two Russian cosmonauts and one American astronaut to Earth.

Russians pledge to go alone

Russia responded to ESA’s decision by saying it would go to Mars independently.

“Roscosmos will be able to carry out a Martian expedition on its own,” said the agency’s head Dmitry Rogozin in a statement.

“Yes, we’ll lose several years, but we’ll copy our landing module, provide it with an Angara launch vehicle, and we will carry out this research expedition from the new launch site of the Vostochny Cosmodrome independently,” Rogozin said.

Aschbacher said ESA would look into collaborating with NASA, which he says has expressed “very strong willingness” to work together on the mission.

ESA and NASA were the original ExoMars collaborators, but NASA dropped out in 2012 due to budgeting problems. Russia took NASA’s place in the project in 2013.

Dependent on Russia

The mission uses a number of Russian-made components — including the rockets. The 2016 launch used a Russian-made Proton-M rocket, the same type planned for the launch in September.

Many components of the mission’s rover are also Russian-made. That includes radioisotope heaters that are used to keep the rover warm at night on the surface of Mars.

David Parker, ESA’s Director of Human and Robotic Exploration, suggested future cooperation with Russia was not off the table.

Parker said that if cooperation with Russia was resumed, a mission could potentially launch in 2024.

If Europe continues without Russia, it will have to reconfigure the mission.

“Radical reconfigurations” of the mission that wouldn’t involve cooperation with Russia could potentially allow for a launch in 2026 or 2028, said Parker.

Mars will wait for us

“It’s been an agonizing decision for our council,” he said. “Literally hundreds of scientists and engineers across Europe, the United States and, yes, Russia, have worked tirelessly to overcome the technical challenges, the programmatic challenges and different cultures to get to the point where we have a spacecraft that would be ready to launch.”

But even if the mission takes longer to realize, he said, Mars will still be there.

“Mars is four and a half billion years old, so we’ll just have to wait a few more years for it to reveal all of its secrets and maybe answer this fundamental question: ‘Was there ever life on Mars?'” said Parker. “It’s a tough, bittersweet time.”



Read in source website

The ancient river delta in the Jezero Crater, where Ingenuity is headed, is filled with jagged cliffs, angled surfaces, projecting boulders, and sand-filled-pockets that could stop a rover in its tracks or even upend a helicopter upon landing.

NASA has extended flight operations for Ingenuity, the first aircraft to operate on the surface of another planet, through September. The helicopter will soon accompany the Perseverance rover and support its upcoming science campaign exploring the ancient river delta of Jezero Crater.

Ingenuity is a small robotic solar-powered helicopter that landed on Mars on February 18, 2021, when it was deployed along with the car-sized Perseverance rover. On April 19, just two months after landing, Ingenuity completed the world’s first powered extraterrestrial flight by taking off, hovering and landing for a flight duration of 39.1 seconds.

The space agency made the announcement after the aircraft’s 21st successful flight, which is the first of at least three needed for the helicopter to cross a portion of a region called “Séítah” on Mars to reach its next staging area.

“Less than a year ago we didn’t even know if powered, controlled flight of an aircraft at Mars was possible,” said Thomas Zurbuchen, an associate administrator at NASA, in a press statement. “Now, we are looking forward to Ingenuity’s involvement in Perseverance’s second science campaign. Such a transformation of mindset in such a short period is simply amazing, and one of the most historic in the annals of air and space exploration.”

Ingenuity’s upcoming missions will feature much more treacherous terrain than the relatively flat ground it has been flying over since its deployment. The ancient river delta in the Jezero Crater is fan-shaped and rises more than 130 feet (40 metres) above the crater floor.

According to NASA, it is filled with jagged cliffs, angled surfaces, projecting boulders, and sand-filled pockets that could stop a rover in its tracks or even upend a helicopter upon landing. But the delta could potentially hold various geological revelations, and even the evidence needed to determine whether microscopic life existed on Mars billions of years ago.

When it reaches the delta, the rotorcraft’s first order of business will be to determine which of two dry river channels Perseverance should take when it’s time to climb to the top of the delta. Apart from routing assistance, it will also provide data for the Perseverance team to assess potential science targets. Scientists might even call upon Ingenuity to image geological features outside the rover’s traversable zone.

According to Teddy Tzanetos, the Ingenuity team lead at NASA, the Jezero river delta campaign will be the biggest challenge the team has faced since the helicopter’s first flight on Mars. The space agency has increased the size of its Ingenuity team and has made upgrades to its flight software.



Read in source website

The roughly 36-million-year-old well-preserved skull was dug up intact last year from the bone-dry rocks of Peru's southern Ocucaje desert

Paleontologists have unearthed the skull of a ferocious marine predator, an ancient ancestor of modern-day whales, which once lived in a prehistoric ocean that covered part of what is now Peru, scientists announced on Thursday.

The roughly 36-million-year-old well-preserved skull was dug up intact last year from the bone-dry rocks of Peru’s southern Ocucaje desert, with rows of long, pointy teeth, Rodolfo Salas, chief of paleontology at Peru’s National University of San Marcos, told reporters at a news conference.

Scientists think the ancient mammal was a basilosaurus, part of the aquatic cetacean family, whose contemporary descendants include whales, dolphins and porpoises.

Basilosaurus means “king lizard,” although the animal was not a reptile; its long body might have moved like a giant snake’s.

The one-time top predator likely measured some 12 meters (39 feet) long, or about the height of a four-story building.

“It was a marine monster,” said Salas, adding the skull, which has already been put on display at the university’s museum, may belong to a new species of basilosaurus.

“When it was searching for its food, it surely did a lot of damage,” added Salas.

Scientists believe the first cetaceans evolved from mammals that lived on land some 55 million years ago, about 10 million years after an asteroid struck just off what is now Mexico’s Yucatan peninsula, wiping out most life on Earth, including the dinosaurs.

Salas explained that when the ancient basilosaurus died, its skull likely sank to the bottom of the sea floor, where it was quickly buried and preserved.

“Back during this age, the conditions for fossilization were very good in Ocucaje,” he said.

Reporting by Marco Aquino and Carlos Valdez; Writing by David Alire Garcia; Editing by Karishma Singh



Read in source website

Pete Davidson has bowed out of a short ride to space on a Jeff Bezos rocket. The 'Saturday Night Live' star is no longer able to make the flight, which has been delayed for nearly a week

Pete Davidson has bowed out of a short ride to space on a Jeff Bezos rocket. The ‘Saturday Night Live’ star is no longer able to make the flight, which has been delayed for nearly a week, Bezos’ space travel company said Thursday night. No other details were provided.

The company announced earlier this week that Davidson would be one of six passengers on Blue Origin’s next flight. It had been scheduled for next Wednesday, but has now been shifted to March 29 for more testing, the company said.

Davidson would have been the third celebrity to climb aboard a Blue Origin automated capsule for the 10-minute flight from West Texas. Actor William Shatner and former NFL great and ‘Good Morning America’ co-host Michael Strahan flew on separate flights last year. Bezos was on his company’s first flight with passengers last July.

The company said it will announce Davidson’s replacement to join the five paying passengers in the coming days. Davidson was going as Bezos’ guest. The company has not disclosed the ticket price for paying customers.

Davidson, who is currently dating reality star Kim Kardashian, wrote and starred in the semi-autobiographical film ‘The King of Staten Island,’ which was released in 2020.



Read in source website

The new images clicked by the James Webb Space Telescope are the highest-resolution infrared images ever taken from space.

NASA has released new images from the James Webb Space Telescope (JWST) that confirm that Webb’s optical performance will be able to meet or exceed the science goals of the project. The images include a ‘selfie’ clicked by the telescope, which shows the progress of mirror alignment.

“We got together and looked at the very first diffraction-limited images that came out of the Webb Telescope and what we collectively saw as a group is we have the highest resolution infrared images taken from space ever,” said scientist Scott Acton in a video released by NASA.

Webb scientists completed a stage of mirror alignment known as ‘fine phasing’ on March 11. At the stage of fine phasing, each of the primary mirror segments was adjusted to produce one unified image of a single bright star using only the NIRCam instrument. The NIRCam or Near-Infrared Camera is JWST’s primary imager.

The team found that all optical parameters have been checked and tested and that they are performing at or above expectations. They also found no critical issues and no measurable contamination or blockages in Webb’s optical path. The telescope is able to successfully gather light from distant objects and deliver it to its instruments.

“In addition to enabling the incredible science that Webb will achieve, the teams that designed, built, tested, launched, and now operate this observatory have pioneered a new way to build space telescopes,” said Lee Feinberg, a Webb optical telescope element manager at NASA in the space agency’s blog.

With the fine phasing stage of alignment complete, JWST engineers have now fully aligned NIRCam to the telescope’s mirrors.



Read in source website

"The opinion has been that this machine is 10 to 20 years away. But in the intelligence world, people are now worried it will be within five years," said Andersen Cheng, founder and chief executive officer of quantum-encryption firm Post Quantum.

Written by Parmy Olson 

Investment and new milestones in quantum computing are bringing the prospect of an ultra-powerful computer that can crack any code closer to reality. Alphabet Inc’s Google and International Business Machines Corp. are racing to increase the number of qubits — the quantum equivalent of bits that encode data on classical computers — on a quantum chip. Firms like Canada’s D-Wave Systems Inc. and French startup Alice&Bob are offering quantum computing services to clients that want broad processing power to solve complex problems.

But any technological advance comes with concerns. While a fully-fledged quantum computer doesn’t appear to exist yet, there is already worry about its ability to crack encryption underpinning critical communications between companies and between armed forces.

Andersen Cheng, founder and chief executive officer of London quantum-encryption firm Post Quantum, joined me on Twitter Spaces on Wednesday to talk about why NATO, banks and other entities need to prepare for a world where “quantum attacks” are possible. Here is an edited transcript of our conversation.

Parmy Olson: How significant is the prospect of quantum computers usurping the machines we use today?

Andersen Cheng: It’s going to impact every single one of us. I trained as a computer auditor over 30 years ago so I have seen enough in cybersecurity, and the biggest existential threat we are facing now is a quantum attack. Remember a few months ago when Facebook, WhatsApp and Instagram went dark for a few hours? Imagine if they went dark and never came back up? Or what if we couldn’t buy our stuff on Amazon? That is the thing we have to worry about in terms of what a quantum machine can do.

One thing that is now emerging is the possibility of a quantum machine that can also crack encryption. When a quantum machine comes in, it’ll be like an x-ray machine. A hacker no longer needs to steal my wallet. All they have to do is to go to the lock on your front door and take an X-ray image of it. They then know what the key looks like and can replicate it.

PO: Machines today can’t crack the encryption underpinning networks like Facebook Messenger, WhatsApp and Signal. Can the quantum-computing services provided by IBM or D-Wave already do that?

AC: No. We cannot tell at this point if someone has already got the first functioning quantum machine somewhere. All the computers we’re using today are what we call classical computers. A quantum machine cannot do very complicated computation, but it can do millions of tries in one go. A quantum machine is useless in doing 99% of the work that we see today, but it’s extremely fast in doing many very simple tries simultaneously.

The opinion has been that this machine is 10 to 20 years away. But in the intelligence world, people are now worried it will be within five years. There’s been more urgency in the last two and a half years. This is why you see a lot more initiatives going on now in terms of claiming quantum supremacy. Nation states have put billions of dollars into building a quantum machine. There have been several lab-based breakthroughs in the past few years, which have got people worried.

PO: Let’s say somebody gets hold of a quantum computer that can break encryption. What could they do?

AC: One option is a harvest-now-and-decrypt-later attack. Right now I’m using my iPhone, using a public key that is encrypted. If someone is trying to intercept and store our information, they are just harvesting it. They cannot decrypt it today. But one day they could open up all the secrets [with a quantum computer].

PO: NATO has started experimenting with your virtual private network which has quantum encryption embedded into it. Why are they trialing this?

AC: The current algorithms we use inside a VPN (a tool used to securely tunnel into a corporate network or through a national firewall) either use a standard from RSA Laboratories or elliptic-curve cryptography. Neither are quantum safe.

PO: Meaning they could be cracked by a quantum computer?

AC: Correct. If you start collecting my data, one day with a quantum machine you could actually crack [the passwords protecting it]. That is the worry from a lot of organizations. NATO has got 30 member states, so interoperability is important. If you send allied troops into Ukraine, they have to talk to each other. Since different armies use different communication protocols, you have to think about the harvest-now-decrypt-later risk. So this is why they are at the forefront of looking for a quantum-safe solution.

PO: What else is at risk from a quantum attack?

AC: Bitcoin and the blockchain. I would say 99% of all cryptocurrencies are using elliptic-curve cryptography, which is not quantum safe. Whoever’s got the first working machine will be able to recover hundreds of billions of dollars worth of cryptocurrency.

PO: Which countries are on the forefront of using quantum encryption?

AC: Canada (where quantum computing firm D-Wave Systems is based) is at the forefront of quantum innovation. Then Australia, the Netherlands, France, the U.K. and then you have the U.S. In 2017, Donald Trump made an executive order for a $1.2 billion quantum computing initiative. That’s actually nothing compared to other nation states. China has openly committed between $12 billion and $15 billion to quantum supremacy. France has committed 1.8 billion euros ($2 billion) to quantum.

PO: What about the commercial sector?

AC: The American commercial sector has been very innovative with quantum computing, including Google, IBM, Honeywell International Inc.

I cannot name names but some of the largest banks are all quietly building up what we call the PQC teams, or the post-quantum crypto teams, to prepare for the migration. Some of them do see it as an existential threat and they also see it as a marketing advantage to tell customers they are quantum-safe. I know one of the largest systems integrators in the world has committed $200 million to build out a quantum consulting division. They see this as like Y2K happening every month in the next 10 years.

PO: Y2K refers to when everybody thought the world’s computers would blow up when the date changed on Jan. 1, 2000.

AC: It was a once-in-a-lifetime event which did not happen. I was working for JP Morgan Chase & Co. at the time on the Y2K migration committee. Three days after Jan. 1, Sandy Warner, then-CEO, sent an email to every employee saying, “Wow, we only spent $286 million on Y2K and nothing happened, so we are very pleased.”

PO: How much of the worries over quantum are being overblown by consultants keen to earn fees to set up these new systems? Bearing in mind you’re in this market too.

AC: The consultants are thinking Christmas has come early. Everyone’s been procrastinating until NIST (Maryland-based National Institute of Standards and Technology) updated its standards to include quantum cryptography. I believe the first wave of huge revenues will go to consulting firms, and then the next wave will come down to vendors like us.



Read in source website

Read more to find out how SpaceX grew from a propulsion engineer's hobby to an industry giant that rivals national space agencies.

Elon Musk’s SpaceX marks twenty years today. It has become one of the biggest private space companies in the world and has achieved some key milestones along the way. For one, SpaceX is the first private company to launch, orbit, and recover a spacecraft. It is also the first private company to send astronauts to orbit and to the International Space Station (ISS). And it is building its satellite internet service, Starlink, which uses ‘mega-constellations’ of small satellites.

We take a look at SpaceX’s key achievements over the past 20 years.

Falcon 1 and NASA

In 2006, NASA awarded a “Commercial Orbital Transportation Services” (COTS) contract to SpaceX, under which the company had to demonstrate cargo delivery capabilities to the ISS, with a contract option for crew transport. After multiple failed launches between 2006 and 2008, SpaceX successfully launched its Falcon 1 vehicle on September 28, 2008, making it the first privately-developed liquid-fueled rocket to reach orbit.

It also became the first of its kind to put a commercial satellite in orbit when it deployed Malaysian satellite RazakSAT in July 2009.

In December 2010, SpaceX’s reusable Dragon spacecraft orbited the Earth twice and was recovered, completing all mission objectives of COTS Demo Flight 1. This made SpaceX the first private company to successfully launch, orbit, and recover a spacecraft.

Dragon C2+ and a reusable Falcon 9

In May 2012, the Dragon C2+ became the first privately manufactured spacecraft to deliver cargo to the ISS. The same year, SpaceX also began testing technology to make its Falcon 9 launch vehicle reusable. The company announced the development of its Starlink service in January 2015, promising to deliver high-speed, low-latency broadband internet to users across the world.

In 2015, SpaceX also saw one of its biggest failures when a Falcon 9 vehicle exploded just over two minutes after launch. But it achieved its first successful booster landing in December 2015, when the first stage of Falcon 9 Flight 20 touched down on land at Cape Canaveral.

Starlink and launching a Tesla into orbit

SpaceX suffered a second major failure in September 2016, when a Falcon 9 rocket exploded on the launch pad during a pre-launch test, destroying a satellite payload worth over $200 million.

In February 2018, SpaceX conducted its first Falcon Heavy test flight, launching Elon Musk’s Tesla Roadster into space with a spacesuit-wearing mannequin in the driver’s seat. It was the first private spacecraft launched into a heliocentric orbit.

In May 2019, SpaceX launched a constellation of 60 Starlink satellites on a Falcon 9 rocket.

Crew Dragon, Starship prototype and Inspiration4 mission

SpaceX made history in May 2020 when it became the first private company to send astronauts to ISS after successfully launching two NASA astronauts into orbit on a Crew Dragon spacecraft.

In January 2021, it broke the record for the highest number of satellites launched in a single mission when it lofted 143 satellites on a Falcon 9 rocket.

On May 5, 2021, the company launched and successfully landed a prototype of its Starship rocket, designed as a reusable transportation system that can carry crew and cargo to Earth’s orbit, the Moon and Mars.

In September 2021, SpaceX launched the Inspiration4 mission, completing the first orbital spaceflight with only private citizens on board. It was also the first orbital mission since the Chinese spacecraft Shenzhou 7 in 2008 to fly without a single crew member who had previous spaceflight experience. All crew members of the mission received training from SpaceX.



Read in source website

Centuries ago, North America had anywhere from 250,000 to 2 million gray wolves. When settlers arrived, they quickly decimated the wolves’ native prey of bison, elk and deer, and then replaced them with livestock.

Written by Hillary Richard

Kent Laudon, a wolf biologist with the California Department of Fish and Wildlife, woke up one morning last year to a flurry of text messages from a rancher in the state’s northernmost county. He was asking about a post with wildly specific details spreading across Facebook that urged people to find a red truck that was transporting breeding wolves along Route 97 into Siskiyou County, California. Laudon was not surprised. This wasn’t the first post of its kind, and it wouldn’t be the last.

“Wolves make people crazy,” he said of these persistent rumors. “And for the record: No, we’re not importing wolves. That never happened.”

Wolves don’t need to be dropped off in California because they are returning on their own. The last of the state’s original wild wolves was killed by a hunter in Lassen County in Northern California in 1924. Since 2011, a series of roving canids have come and gone. Now it seems that in the state’s far-north counties, families of wolves are there to stay, with a relatively stable population of about 20 wolves. That number may fluctuate once spring begins and new pups emerge from their dens, but California can probably expect to have wolves calling the state home for years to come.

Their return is motivating conservationists and scientists like Laudon to battle misinformation and the deep politicization of the species. Simultaneously, biologists are learning more about their habits in an effort to help humans and wolves coexist.

Centuries ago, North America had anywhere from 250,000 to 2 million gray wolves. When settlers arrived, they quickly decimated the wolves’ native prey of bison, elk and deer, and then replaced them with livestock. California’s wolves were no exception.

But experts agree it was only a matter of time before wolves returned.

When wolves go in search of mates and their own territory, they disperse from their packs on remarkable journeys. A wolf named OR-7 roamed California for 15 months starting in December 2011. His radio collar recorded around 4,000 miles in his quest for a partner; he eventually found one in Oregon, his home state. One of his daughters, OR-54, traveled over 8,700 miles, including a trip to the Lake Tahoe Basin.

Last year, a 2-year-old lone wolf broke records when he traveled through the Central Coast of California, the first known to do so in over a century. The wolf, named OR-93, wandered from the Mount Hood area of Oregon to San Luis Obispo County, California. In November, he was hit by a car 50 miles north of Los Angeles after traveling over 1,000 miles through the state.

While scientists believe that other uncollared wolves have been roaming wide swaths of the state largely undetected, wolves did not stay put in California until recently.

In 2015, the state briefly became home to its first modern wolf pack when a pair of wolves from Oregon arrived in the Shasta County area. The “Shasta Pack” were the first wild wolves to settle in California since the species’ eradication in the state, which took place in the same area. When the Shasta Pack mysteriously disappeared months later after one litter, California was again without wolves.

In 2017, a new wolf pack took up residence over a roughly 500-square-mile area where western Lassen and northern Plumas counties meet. The “Lassen Pack” has had successful litters every year since its arrival. In November 2020, two new wolves arrived in the state, creating the “Whaleback Pair” — and their new pups — which now occupy 480 square miles in eastern Siskiyou County. Last May, biologists discovered the “Beckwourth Pack” in eastern Plumas County, led by a 2-year-old female from the Lassen Pack.

There are an estimated 6,000 wolves in the lower 48 states. California’s current wolves dispersed from three modern populations: Yellowstone, Idaho and northwest Montana. Wolves entered Montana on their own but were hunted relentlessly. They were reintroduced to Yellowstone National Park and central Idaho from Canada in the 1990s. From there, some dispersed to Washington state. Oregon’s first pack arrived in 2009. A trip south into California was inevitable.

“For the most part, California has really laid out the welcome mat for wolves. When OR-7 came in 2011, it was an enormous celebratory moment,” said Amaroq Weiss, a wolf biologist with the Center for Biological Diversity. “We’ve seen the same spike of excitement with every new wolf that has come into California. People are drawn to the story of a lone individual seeking a mate or going on an adventure in a place where his species hasn’t been for years.”

However inviting California has been, the state’s landscape looks very different than it did a century ago when its last wild wolves were wiped out. The number of people living in the state’s remote north has doubled since then.

And where there are people living, working and farming, wolves often have a bad reputation.

“Wolves have been politicized because they are right in the middle of this divide between rural and urban, and this divide we have in the country between one set of facts and another,” Laudon said.

The gray wolf was removed from the federal endangered species list in the final months of the Trump administration. Weeks later, in February 2021, Wisconsin hunters killed 218 wolves in 60 hours, exceeding a seasonlong hunting quota of 119. That obliterated nearly 20% of the entire state’s wolf population in less than three days (illegal poaching might have killed more). Wildlife groups and Ojibwe tribes sued in response, and the November 2021 hunt was put on hold.

Then in February, a federal judge in California restored the wolves’ federal protection, which will end hunts like the one in Wisconsin for now.

But even with the protections restored, the ruling excludes wolves in much of the northern Rocky Mountain regions. Because of their higher populations, wolves in Montana, Idaho, Wyoming and parts of Washington, Oregon and Utah were not included in the scope of the decision. For now, these wolves will still be managed by their respective states.

In 2021, Idaho lawmakers passed a bill that placed almost no restrictions on how the roughly 1,500 wolves in the state could be hunted and allowed the purchase of unlimited wolf hunting permits. In addition to approving neck snares, baiting and nighttime hunting, a new law in Montana allows bounties on wolves, much like the early 20th century practices that endangered the species in the first place.

In recent months, Yellowstone National Park officials were distressed to learn at least 20 gray wolves were killed after wandering out of park boundaries onto state land in Montana, Wyoming and Idaho. That is the highest number of hunting season deaths since the species was reintroduced to the area in 1995. Now, there are fewer than 100 wolves in the park.

“The wolf is a surrogate for people’s hatred against government intervention because they’ve been protected. People see protecting wolves as a symbol of everything they hate about the government telling them what they can and can’t do,” Weiss said.

In contrast, California, a state that has both extremely rural and extremely urban areas, has one of the strongest state endangered species acts in the nation. It is a crime to kill a wolf in California.

Where the wolves roam, the state’s fish and wildlife agency tracks their whereabouts and collects blood samples, DNA samples, weight statistics and health information whenever possible to gain a better understanding of who stays, who leaves and where they settle. Some wolves are fitted with satellite modems attached to neck collars. California and Oregon’s fish and wildlife departments speak regularly about individual wolves and share their collar data. Occasionally uncollared wolves pop up on trail cameras or through DNA samples in California, typically in Lassen, Modoc, Plumas and Siskiyou counties.

The wolves even managed to survive the Dixie wildfire in California, the second-largest in the state’s history, which swept through their territories and burned nearly 1 million acres last summer.

But that doesn’t mean everyone is happy about wolves returning. An important part of Laudon’s job is battling the wolves’ bad reputation. He tries to break down barriers by presenting information in a nonthreatening way that allows people to make their own decisions. Sometimes it works.

Dusty de Braga is a contract grazer who manages cattle across 200,000 acres of Lassen and Plumas counties. When he first heard that wolves were back in California, he assumed they were being imported.

“It seemed fishy to me,” he said. After seeing data on how far the collared wolves traveled, he changed his mind.

“Now I think it’s not out of the realm of possibility they naturally dispersed,” he said, but he added that plenty of other people were still convinced that state wildlife officials brought them in.

De Braga has seen wolves semiregularly since they arrived. He estimates that between his herds and the herds of his two closest neighbors, wolves have killed over 20 cows and calves in the last five years. Some, but not all, have been confirmed by the Department of Fish and Wildlife.

“Wolves are new here. When it’s new it’s the hardest. Any time wolves kill something, that’s what gets in the paper. For 363 days of the year, it’s fine. Two days wolves screw up, and it makes the news,” Laudon said. “It fosters the notion that they’re this really damaging critter, and the good news is, usually wolves aren’t anywhere near that bad.”

This article originally appeared in The New York Times.



Read in source website

The ability to extract oxygen and other useable materials from lunar regolith will be a game-changer for lunar exploration.

The European Space Agency (ESA) announced the winning industrial team that will design and build an experimental payload to extract oxygen from the regolith (lunar soil) on the surface of the Moon. The team is led by UK-based Thales Alenia Space and is tasked with producing a small solar-powered prototype device that will be used to evaluate the prospect of building oxygen-generation plants on the moon.

This could be useful for generating oxygen that can be used as a propellant and for astronauts to breathe.

The compact payload designed by the team will have to extract between 50 and 100 grams of oxygen from the lunar regolith while targeting the extraction of 70 per cent of all available oxygen within the sample. The device will also have to do all this within a period of ten days, which is how long solar power will be available within a single lunar day before the pitch-black and freezing lunar night.

The payload is also required to be low power and able to fly on many different lunar landers including ESA’s own European Large Logistics Lander, EL3.

The winning team, which also includes AVS, Metalysis, the Open University and Redwire Space Europe, was selected by ESA’s Directorate of Human and Robotic Exploration in 2021 after a detailed study that compared three rival designs.

According to David Binns, Systems Engineer at ESA’s Concurrent Design Facility (CDF), the ability to extract oxygen and other useable materials from lunar regolith will be a game-changer for lunar exploration, allowing astronauts to ‘live off the land’ without depending on long and expensive supply lines from Earth.

Earlier studies have shown that lunar regolith consists of 40-45% oxygen by weight. The problem is that this oxygen is bound up with other chemicals as oxides, in the form of minerals or glass, making it unavailable for use without processing.
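Taken together, these figures give a rough feel for the scale of the task. The short Python sketch below is purely illustrative (the assumed oxygen fraction and the derived regolith mass are not ESA figures); it simply combines the 40-45% oxygen content, the 70% extraction target and the upper end of the 50-100 gram goal.

```python
# Rough, illustrative estimate of how much regolith the payload might need to
# process; none of the derived numbers here come from ESA.

oxygen_fraction = 0.42        # assumed mid-range of the 40-45% oxygen-by-weight figure
extraction_efficiency = 0.70  # the 70% extraction target mentioned above
target_oxygen_g = 100         # upper end of the 50-100 g goal

regolith_needed_g = target_oxygen_g / (oxygen_fraction * extraction_efficiency)
print(f"Regolith required: ~{regolith_needed_g:.0f} g")  # roughly 340 g
```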

In 2020, the ESA had set up a prototype oxygen plant in the Materials and Electrical Components Laboratory of the European Space Research and Technology Centre, ESTEC, based in the Netherlands. The lab is used to extract oxygen from simulated regolith to fine-tune the process for efficiency.



Read in source website

During the research, not only was it found that ants can distinguish between cancerous and non-cancerous cells, but they could also distinguish between cells from two different cancerous lines.

While using dogs to detect the presence of cancer in cells is a well-documented concept, researchers at Université Sorbonne Paris Nord and PSL Research University in France have found that ants can do the same job as accurately as dogs while requiring far less training.

Cancer cells are different from normal cells and have particular abilities that cause them to produce volatile organic compounds (VOCs) that can act as biomarkers for cancer diagnosis when using gas chromatography or artificial olfactory systems.

But the results of gas chromatography analysis are extremely variable, and ‘E-noses’ (artificial olfactory systems) have yet to reach the point where a cost-effective and sufficiently accurate prototype is on the horizon.

This is why the noses of animals like dogs are so well-suited to detecting the VOCs produced by cancerous cells and, thereby, to detecting cancer biomarkers. Dogs have refined their olfactory senses over millions of years of evolution and can detect extremely faint odours, as well as having the brainpower to distinguish between them.

But it takes months of training and conditioning, and hundreds of time-consuming trials, before a dog can successfully distinguish between cancerous and non-cancerous cells. For example, in one study, it took two dogs 5 months of training and 1,531 conditioning trials to perform 31 tests with 90.3% accuracy.

Armed with earlier evidence that insects could also use odour to detect cancer cells, researchers combined the use of ants with a ‘low-cost, easily transferable, behavioural analysis’ to create a bio-detector tool for cancer VOCs.

According to the research paper published in iScience, the researchers subjected 36 individual F. fusca ants to three training trials in which they were placed in a circular arena where the odour of a human cancer cell sample was associated with a reward of sugar solution.

Over the trials, the time the ants needed to find the reward decreased, indicating that they had been trained to detect the presence of the cells based on the VOCs they emit. This was confirmed in two consecutive memory tests conducted with no reward present.

During the research, not only was it found that ants can distinguish between cancerous and non-cancerous cells, but they could also distinguish between cells from two different cancerous lines.

The short training time and the fact that ants reproduce easily make their use as bio-detectors of cancer-cell VOCs more viable than training and testing dogs or other larger animals with a keen sense of smell.



Read in source website

Built around a modular payload system inspired by conventional containerized shipping, FLEX is reportedly versatile enough to be used for exploration, cargo delivery, site construction and other logistical work on the moon

A Los Angeles-area startup founded by a veteran spaceflight robotics engineer unveiled on Thursday its full-scale, working prototype for a next-generation lunar rover that is just as fast as NASA’s old “moon buggy” but is designed to do much more.

The company, Venturi Astrolab Inc, released photos and video showing its Flexible Logistics and Exploration (FLEX) vehicle riding over the rugged California desert near Death Valley National Park during a five-day field test in December.

Astrolab executives say the four-wheeled, car-sized FLEX rover is designed for use in NASA’s Artemis program, aimed at returning humans to the moon as early as 2025 and establishing a long-term lunar colony as a precursor to sending astronauts to Mars.

Unlike the 1970s Apollo-era moon buggies or the current generation of robotic Mars rovers tailored for specialized tasks and experiments, FLEX is designed as an all-purpose vehicle that can be driven by astronauts or by remote control.

Built around a modular payload system inspired by conventional containerized shipping, FLEX is versatile enough to be used for exploration, cargo delivery, site construction and other logistical work on the moon, the company says.

“For humanity to truly live and operate in a sustained way off Earth, there needs to exist an efficient and economical network all the way from the launch pad to the ultimate outpost,” Astrolab founder and CEO Jaret Matthews said in a statement announcing the rover’s development.

Other aerospace companies have announced new lunar rover design concepts, “but so far I believe, we’re the only ones who have produced a working prototype of this scale and capability,” Matthews told Reuters in an interview on Wednesday.

If NASA adopts FLEX and its modular payload platform for Artemis, it would become the first passenger-capable rover to ply the lunar surface since Apollo 17, the last of six original U.S. manned missions to the moon, in December 1972.

Apollo 17’s lunar roving vehicle set a moon speed record of 11 miles per hour (17.7 km/h). FLEX can move just as swiftly.

Apollo astronauts found “they spent just as much time off the ground as on it at that speed, so it’s kind of a practical limit for the moon,” where gravity is one-sixth that of Earth, said Matthews, a former rover engineer for NASA’s Jet Propulsion Laboratory.

While the Apollo LRVs carried up to two astronauts seated at their controls like a car, FLEX passengers – one or two at a time – ride standing in the back, driving the vehicle with a joystick.

With its solar-powered batteries fully charged, the vehicle can drive two astronauts for eight hours straight and has sufficient energy capacity to survive the extreme cold of a lunar night, up to 300 hours in total darkness, at the moon’s south pole, Matthews said.

During the field test at the Dumont Dunes Off-Highway Recreation Area, the rover was piloted by retired Canadian astronaut and Astrolab advisory board member Chris Hadfield and MIT aerospace graduate student Michelle Lin.

Video showed the pair dressed in mock spacesuits riding on the vehicle over a sand dune and using it to set up a large, vertical solar array. “It was huge fun to drive the FLEX,” Hadfield said in the video.



Read in source website

Archaeological records sit well with the new genetic evidence.

A new study in Nature reports on DNA recovered from six individuals from southeastern Africa who lived between 18 and 5 kya (thousand years ago). Notably, it is one of the few studies to have reported ancient DNA from the continent, where hot and humid conditions are not conducive to the preservation of genetic material.

Lipson et al. (2022) reported the entire genetic sequence, along with radiocarbon dates, of three Late Pleistocene (125-12 kya) and three early-mid Holocene (11-5 kya) individuals (a total of six: four infants and two adults). These six individuals were spread across five sites in eastern and southern-central Africa: in Tanzania, Malawi and Zambia, to be precise. They range in age from 18 to 5 kya, ‘doubling the time depth of aDNA reported from sub-Saharan Africa.’ The exercise was supplemented by previously published studies.

The DNA was sourced from the petrous bone of the inner ear. The petrous is one of the hardest and densest bones in the body and preserves genetic material better than any other. A 2015 study even reported over 100 times more DNA yield from the petrous than from any other bone. Ancient DNA from the petrous has helped shed light on the first farmers in Turkey, ancestry in Oceania and diaspora in Tanzania, among other things.

The researchers tapped into the insights offered by uniparental markers, i.e., those components of an individual’s genetic material that come from only one parent. Uniparental markers are passed down ‘as is’ from one generation to the next, i.e., the set of traits is passed down together as a single ‘haplotype’. Uniparental markers are, therefore, extremely useful in reconstructing lineages through deep time. Two of the most commonly studied uniparental markers are the Y-chromosome, which follows strict paternal inheritance, and mitochondrial DNA (mtDNA), which follows strict maternal inheritance.
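To make the contrast concrete, here is a toy Python sketch (not from the paper; the haplogroup label and ancestry labels are made up) showing how a maternally inherited mtDNA haplotype persists unchanged across generations, while autosomal ancestry is diluted by mixing every generation.

```python
# Toy model only: mtDNA is copied unchanged from the mother, while autosomal
# ancestry is treated as an even blend of both parents each generation.

def child(mother, father):
    ancestries = set(mother["autosomal"]) | set(father["autosomal"])
    autosomal = {a: (mother["autosomal"].get(a, 0) + father["autosomal"].get(a, 0)) / 2
                 for a in ancestries}
    return {"mtDNA": mother["mtDNA"], "autosomal": autosomal}

# Hypothetical founding mother with a distinctive mtDNA haplotype
lineage = {"mtDNA": "L0-like", "autosomal": {"Eastern": 1.0}}

for _ in range(10):  # ten generations of mixing with 'Central' fathers
    father = {"mtDNA": "other", "autosomal": {"Central": 1.0}}
    lineage = child(lineage, father)

print(lineage["mtDNA"])      # still 'L0-like': the maternal marker is intact
print(lineage["autosomal"])  # the 'Eastern' fraction has shrunk to about 0.001
```

Because the maternal (or paternal) marker survives intact even when the rest of the genome is heavily mixed, it can be traced much further back in time than recombining DNA.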

Based on the aforementioned uniparental marker analysis, Lipson et al. (2022) find that (a) specimens from Kenya and Tanzania have haplotypes/haplogroups associated with East Africa; (b) those from Malawi and Zambia have haplogroups associated with ‘some ancient and present-day Southern African people,’ especially those still engaged in foraging; and (c) one individual from Malawi and [maybe] one from Kenya carry haplogroups of present-day central African foragers. In the past, these haplogroup populations were much more widespread than they are today.

Researchers identified three distinct ancestries, with a distinct geographical structure: one in East Africa, one in southern Africa (not to be confused with South Africa) and another in the Central African rainforests. The genetic structures remained highly stable and localised vis-à-vis their geographies, and there was limited gene flow. These distinct genetic structures have since been masked over the last 5,000 years by migrations driven by the transition to sedentary agriculture, and even more so in the recent past due to imperialism and changing socio-politics. It is therefore difficult to reconstruct past demographic changes from modern DNA alone, and important to tap into ancient DNA wherever possible.

The three ancestries – Central African, Eastern and Southern – are by and large present from southwestern Kenya to southeastern Zambia. By 16 kya, all three components were present in Malawi and, by 7 kya, in Tanzania.

While the three ancestries are present in different proportions, ‘geographical proximity remains the strongest predictor of genetic similarity.’ The analysis shows that these groups were essentially separated 200 kya but came into contact with each other 80-50 kya. This led the authors to conclude that long-range movements of people were probably rare in the terminal Pleistocene/Holocene. The same is evidenced by signals in the admixture analysis, which examines the amount of connectedness between two gene pools: admixture graphs showed high genetic relatedness at a localised level but not over long distances. For instance, within the three ancestries, individuals in one cluster showed ‘excess allele sharing, even beyond what would be expected from having similar ancestry proportions.’

After 10 kya, these lineages were possibly brought together by fragmenting forests and expanding grasslands, which left more room for people to move around.

The archaeological record sits well with the genetic evidence. Most records of material culture can be distinctly identified in space and time (‘regionalisation’). Even linguistic data suggests a transition towards local interactions – to this day, foraging communities in central, eastern and southern Africa speak languages belonging to different families (they do bear some similarities, of course).

‘Our genetic results confirm that trends toward regionalisation extended to human population structure, suggesting that decreasing gene flow accompanied changes in behaviour and possibly language,’ researchers argue.

The author is a research fellow at the Indian Institute of Science (IISc), Bengaluru, and a freelance science communicator. He tweets at @critvik 



Read in source website

David Bennett, 57, died on March 8 at the University of Maryland Medical Center.

The first person to receive a heart transplant from a pig has died, two months after the groundbreaking experiment, the Maryland hospital that performed the surgery announced on March 9.

David Bennett, 57, died on March 8 at the University of Maryland Medical Center. Doctors didn’t give an exact cause of death, saying only that his condition had begun deteriorating several days earlier.

Mr. Bennett's son praised the hospital for offering the last-ditch experiment, saying the family hoped it would help further efforts to end the organ shortage.

“We are grateful for every innovative moment, every crazy dream, every sleepless night that went into this historic effort,” David Bennett Jr. said in a statement released by the University of Maryland School of Medicine. “We hope this story can be the beginning of hope and not the end.”

Doctors for decades have sought to one day use animal organs for life-saving transplants. Bennett, a handyman from Hagerstown, Maryland, was a candidate for this newest attempt only because he otherwise faced certain death — ineligible for a human heart transplant, bedridden and on life support, and out of other options.

After the January 7 operation, Mr. Bennett's son told The Associated Press his father knew there was no guarantee it would work.

Prior attempts at such transplants — or xenotransplantation — have failed largely because patients’ bodies rapidly rejected the animal organ. This time, the Maryland surgeons used a heart from a gene-edited pig: Scientists had modified the animal to remove pig genes that trigger the hyper-fast rejection and add human genes to help the body accept the organ.

At first the pig heart was functioning, and the Maryland hospital issued periodic updates that Mr. Bennett seemed to be slowly recovering. In February 2022, the hospital released video of him watching the Super Bowl from his hospital bed while working with his physical therapist.

Mr. Bennett survived significantly longer with the gene-edited pig heart than one of the last milestones in xenotransplantation — when Baby Fae, a dying California infant, lived 21 days with a baboon's heart in 1984.

“We are devastated by the loss of Mr. Bennett. He proved to be a brave and noble patient who fought all the way to the end,” Dr. Bartley Griffith, who performed the surgery at the Baltimore hospital, said in a statement.

The need for another source of organs is huge. More than 41,000 transplants were performed in the U.S. last year, a record — including about 3,800 heart transplants. But more than 106,000 people remain on the national waiting list, thousands die every year before getting an organ and thousands more never even get added to the list, considered too much of a long shot.

The Food and Drug Administration had allowed the dramatic Maryland experiment under “compassionate use” rules for emergency situations. Bennett’s doctors said he had heart failure and an irregular heartbeat, plus a history of not complying with medical instructions. He was deemed ineligible for a human heart transplant that requires strict use of immune-suppressing medicines, or the remaining alternative, an implanted heart pump.

Doctors didn't reveal the exact cause of Mr. Bennett's death. Rejection, infection and other complications are risks for transplant recipients.

But from Mr. Bennett's experience, "we have gained invaluable insights learning that the genetically modified pig heart can function well within the human body while the immune system is adequately suppressed”, said Dr. Muhammad Mohiuddin, scientific director of the Maryland university’s animal-to-human transplant programme.

One next question is whether scientists have learned enough from Mr. Bennett's experience and some other recent experiments with gene-edited pig organs to persuade the FDA to allow a clinical trial — possibly with an organ such as a kidney that isn’t immediately fatal if it fails.

Twice last year, surgeons at New York University got permission from the families of deceased individuals to temporarily attach a gene-edited pig kidney to blood vessels outside the body and watch them work before ending life support. And surgeons at the University of Alabama at Birmingham went a step further, transplanting a pair of gene-edited pig kidneys into a brain-dead man in a step-by-step rehearsal for an operation they hope to try in living patients possibly later this year.

Pigs have long been used in human medicine, including pig skin grafts and implantation of pig heart valves. But transplanting entire organs is much more complex than using highly processed tissue. The gene-edited pigs used in these experiments were provided by Revivicor, a subsidiary of United Therapeutics, one of several biotech companies in the running to develop suitable pig organs for potential human transplant.



Read in source website

Meat from gene-edited cattle could be on the way in a few years

U.S. regulators on Monday cleared the way for the sale of beef from gene-edited cattle in coming years after the Food and Drug Administration concluded the animals do not raise any safety concerns.

The cattle, developed by Recombinetics, are the third type of genetically altered animal to be given the green light for human consumption in the U.S., after salmon and pigs. Many other foods already are made with genetically modified ingredients from crops like soybeans and corn.

The cattle reviewed by the FDA had genes altered with a technology called CRISPR to have short, slick coats that let them more easily withstand hot weather. Cattle that aren’t stressed by heat might pack on weight more easily, making for more efficient meat production.

The company did not say when home cooks or restaurants might be able to buy the beef, but the FDA said it could reach the market in as little as two years.

Unlike the salmon and pigs, the cattle did not have to go through a yearslong approval process. The FDA said the cattle were exempt from that because their genetic makeup is similar to other existing cattle and the trait can be found naturally in some breeds.

Dr. Steven Solomon, director of the FDA’s Center for Veterinary Medicine, said the agency’s review of Recombinetics' cattle took several months. He said there’s no reason meat from the animals or their offspring would need to be labeled differently.

Solomon said a genetically altered animal marketed as having a special advantage — such as a higher-than-normal ability to withstand heat — might need to go through the full approval process.

“This opens up a completely different pathway,” he said, noting the decision could be encouraging for other biotech companies, many of which are small startups.

The gene-edited trait in the Recombinetics cattle can be passed down so semen and embryos from them could be used to produce offspring with the same shorter coats.

The trait will make beef production more sustainable and improve animal welfare in warmer climates, Recombinetics said in a statement, without providing further details.

Greg Jaffe, who specializes in biotechnology at the Center for Science in the Public Interest, said the FDA’s announcement made clear it wasn't exempting all gene-edited animals from the longer approval process.

“They reinforce the idea that this is a case-by-case review,” Mr. Jaffe said.

He said the agency should be more transparent about the review process so people know what's in the works. That could lead to better public acceptance and minimize any potential economic disruptions from global trade, since other countries might consider the animals genetically modified foods that need to be labeled, he said.

Jaydee Hanson, of the Center for Food Safety, said the agency should keep track of the animals for several generations to ensure there aren’t any unintended issues.

The genetically modified pig is intended mainly for medical purposes, not meat, according to the company that developed it. The firm recently provided a pig heart that was transplanted into a dying man in an experimental surgery.

The company behind the modified salmon said the fish are being sold to distributors in the Midwest and Northeast.

Alison Van Eenennaam, an animal geneticist at University of California, Davis who has worked with Recombinetics, said requiring all companies to go through the lengthy approval process could end the possibility of commercializing gene-edited animals in the U.S.

For the gene-edited cattle cleared by the FDA, she said it could take about two years for beef from the offspring to reach the market.

Once the semen is used to create embryos, she said gestation would take about nine months and the resulting calves might be slaughtered after about 10 months. She noted the market isn’t limited to the U.S., given the way cattle are bred.



Read in source website

Peak season overlapped with outbreak of third wave of pandemic in Punjab, says official

The unconducive weather conditions in January and early February of 2022 may have made it difficult for bird lovers this season to conveniently sight the winter migratory waterbirds, which make their way to different wetlands of Punjab and other parts of the country through the central Asian flyway. But an encouraging trend in waterbird numbers and species diversity has been observed at the wetlands.

Every winter, the birds make their way to India through the central Asian flyway, which covers a large continental area of Europe–Asia between the Arctic and the Indian Oceans.

Every year, the Department of Forests and Wildlife Preservation, Punjab, conducts a waterbird census exercise in the six major and most biodiverse wetlands: the Nangal Wildlife Sanctuary, the Ropar Conservation Reserve, the Harike Wildlife Sanctuary, the Kanjli Wetland, the Keshopur-Miani Community Reserve and the Ranjit Sagar Conservation Reserve.

However, the census could not be done this year on account of dense fog conditions. Instead, a “species richness” survey was conducted by the Department of Forests and Wildlife Preservation with support from WWF-India.

R.K. Mishra, Chief Wildlife Warden, said a promising trend in waterbird numbers and species diversity had been observed in the wetlands of Pathankot and Gurdaspur districts, as the marshlands are full of water due to good rains and good flow into the Ravi river.

“Flocks of northern lapwings numbering up to 191 were observed in Gurdaspur wetlands which are higher in comparison to the previous three years’ average of 105. Similarly, 655 common cranes were recorded this year which is higher in comparison to the previous three years’ average of 555,” he told The Hindu.

Pointing out that 91 species of waterbirds were recorded from the six protected wetlands during the waterbird species richness survey, Gitanjali Kanwar, Coordinator — Rivers, Wetlands and Water Policy, WWF–India said: “The waterbird count was highest in the Harike Wildlife Sanctuary followed by the Keshopur–Miani Community Reserve, Ropar Conservation Reserve and Nangal Wildlife Sanctuary.

“Like previous years, the Harike Wildlife Sanctuary hosted the largest congregation and diversity of waterbirds, whereas wetlands like Keshopur–Miani and Shallpattan are the only wetlands in Punjab to host the migratory population of the common crane and the resident population of the Sarus crane. The Ropar and Nangal wetlands host three migratory waterbird species of the family Podicipedidae, i.e., the Black-necked Grebe, Horned Grebe and Great Crested Grebe, along with the resident Little Grebe.

“The year 2022 has been very difficult and challenging in relation to conducting the waterbird census exercise in wetlands of Punjab. The peak migratory bird season overlapped with the outbreak of the third wave of COVID-19 pandemic. Also, an unusually severe cold wave engulfed northern India in January and early February 2022 making an unconducive situation because of intermittent rains and dense fog for assessing the wetlands and conducting the field survey for waterbirds.

“However, before the onset of the reverse migration of the waterbirds, it was decided to conduct a waterbird ‘species richness’ survey in February 2022.”

She said, “The species of high conservation significance recorded during the survey include Bonelli’s Eagle, Greater Spotted Eagle, Northern Lapwing, Peregrine Falcon, Steppe Eagle, Western Black-tailed Godwit, Black-headed Ibis, Sarus Crane, Painted Stork, Woolly-necked Stork, Common Pochard, Common Crane, Ferruginous Pochard, Pallid Harrier, River Tern, Indian Spotted Eagle, River Lapwing, Oriental Darter, and Eurasian Curlew.”

Ms. Kanwar said the Eurasian Coot was one of the most common waterbirds spotted in almost all protected wetlands of Punjab during the survey.

“The Eurasian Coot also forms one of the highest densities among all the waterbirds recorded from Nangal, Ropar, Harike, Keshopur–Miani and Kanjli wetland followed by Gadwall and Common Teal,” she said.



Read in source website

Since transmission begins before symptoms set in and before the disease becomes severe, transmissibility is decoupled from disease severity

In early February, World Health Organization technical lead on Covid-19, Dr Maria Van Kerkhove, cautioned that the pandemic is far from over and new variants will emerge and such variants could be more transmissible than the Omicron BA.2 variant. “The next variant of concern will be more fit, and what we mean by that is it will be more transmissible because it will have to overtake what is currently circulating. The big question is whether or not future variants will be more or less severe,” Dr Van Kerkhove said.

Evading antibodies

The only way the next variant can become even more transmissible than the Omicron variant is by exhibiting a far higher ability to evade neutralising antibodies. This would mean that full vaccinations (two doses) will be even less effective in preventing breakthrough infections. But so far, fully vaccinated people have been found to be less likely to suffer from severe disease requiring hospitalisation and even death. That is because it is the T cells and B cells that come into play to reduce the severity of the disease. “The memory T cells are extremely unlikely to prevent SARS-CoV-2 infections. That is just not what T cells generally do. They may reduce COVID-19 disease severity and prevent deaths,” Dr. Shane Crotty from La Jolla Institute for Immunology, La Jolla, California, had earlier told The Hindu.

“The variants are a wild card. We still don’t know everything about this virus, we still don’t know everything about the variants and the future trajectory of that,” Dr. Van Kerkhove added.

Virulence unpredictable

While the next variant will necessarily be more infectious than the Omicron variant, whether it will be more or less severe cannot be said with certainty. But it is important to remember that, right from the very early stage of the pandemic, it became clear that transmission, or virus spread, begins even before symptoms show up. That is what makes SARS-CoV-2 very different from the 2002 SARS virus and the MERS virus. Since transmission begins even before symptoms set in and well before the disease becomes severe, transmissibility is decoupled from disease severity. As a result, natural selection favours variants not on the basis of how severe a disease they cause but on how well they escape neutralising antibodies.

“Almost all [SARS-CoV-2] transmission happens while people have no or few symptoms, there is no particular reason for severity to play a role in evolutionary selection. NERVTAG [The New and Emerging Respiratory Virus Threats Advisory Group] thinks Omicron's mildness is likely pure chance and the next one is likely to be more severe again,” Dr William P. Hanage from Harvard T.H. Chan School of Public Health, Boston, tweeted.

Immune escape

The virus was novel, and no one in the world had any immunity at the beginning of the pandemic. But with millions having been infected by the virus, millions fully vaccinated, and some carrying a combination of natural infection and vaccination, the next variant will necessarily have to exhibit greater immune escape than the Omicron variant in order to cause infections.

Even though the Omicron variant caused a large number of infections in virus-naïve people and in those who have been previously infected and vaccinated, at the population level, disease severity has been far less severe compared with the Delta variant. But lower disease severity was seen more in people who have pre-existing immunity either from vaccination or previous infection.

Two studies that tried to document the intrinsic disease severity of the Omicron variant compared it with the Delta variant. The studies found that the Omicron variant is about 75% as likely to cause severe disease or death as the Delta variant. In a study posted as a preprint on medRxiv on January 12 this year, the authors conclude: “In the Omicron-driven wave, severe COVID-19 outcomes were reduced mostly due to protection conferred by prior infection and/or vaccination, but intrinsically reduced virulence may account for an approximately 25% reduced risk of severe hospitalization or death compared to Delta.”
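To see how a large population-level drop in severity can coexist with only a modest drop in intrinsic severity, consider the toy Python calculation below. All of the numbers are hypothetical (only the ~25% intrinsic reduction echoes the estimate quoted above); it is a sketch of the reasoning, not a reproduction of either study.

```python
# Illustrative numbers only: the sketch shows how population-level severity can
# fall far more than the ~25% intrinsic reduction once prior immunity is widespread.

def population_severe_risk(intrinsic_risk, immune_fraction, protection):
    """Average severe-outcome risk across immune and immunologically naive groups."""
    return (immune_fraction * intrinsic_risk * (1 - protection)
            + (1 - immune_fraction) * intrinsic_risk)

intrinsic_delta = 0.040                      # hypothetical severe-outcome risk with no immunity
intrinsic_omicron = 0.75 * intrinsic_delta   # ~25% lower intrinsic severity, per the estimate above
protection = 0.90                            # hypothetical protection against severe disease

delta_wave = population_severe_risk(intrinsic_delta, immune_fraction=0.40, protection=protection)
omicron_wave = population_severe_risk(intrinsic_omicron, immune_fraction=0.80, protection=protection)

print(f"Delta-wave risk:   {delta_wave:.4f}")    # 0.0256
print(f"Omicron-wave risk: {omicron_wave:.4f}")  # 0.0084 -> ~67% lower, despite only a 25% intrinsic drop
```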

Intrinsic severity

In the second study, a report by the Imperial College COVID-19 response team found a 69% reduction in hospitalisation risk in people who had been reinfected compared with primary cases.

“This meaningful but fairly small difference implies that Omicron, Alpha, and wild-type SARS-CoV-2 have similar intrinsic severity,” Dr. Roby P. Bhattacharyya from Massachusetts General Hospital, Boston, and Dr. William P. Hanage from Harvard T.H. Chan School of Public Health, Boston, write in The New England Journal of Medicine.

“Viruses don’t inevitably evolve toward being less virulent; evolution simply selects those that excel at multiplying. In the case of COVID-19, in which the vast majority of transmission occurs before disease becomes severe, reduced severity may not be directly selected for at all,” Dr. Bhattacharyya and Dr. Hanage write. “Indeed, previous SARS-CoV-2 variants with enhanced transmissibility (e.g., Alpha and Delta) appear to have greater intrinsic severity than their immediate ancestors or the previously dominant variant.”

“It is also not true that variants are becoming milder. Delta was more severe than Alpha which was more severe than the original [virus]. Omicron is milder than Delta but likely not milder than original [virus]... and it’s not part of a steady progression to mildness,” Dr. Hanage tweeted.

Separate lineages

Just as transmission is decoupled from disease severity for the SARS-CoV-2 virus, it is also true that the new variants have not evolved from the existing ones. “Thus far, new variants of concern have not evolved from the dominant preceding one. Instead, they have emerged from separate lineages,” says a report in Nature. Dr. William Hanage, too, says the same in a tweet: “[SARS-CoV-2] evolves rapidly, but this isn't straightforward. None of the main variants evolved from each other. Instead, so far they are all distinct, becoming gradually fitter via subvariants until replaced by an entirely new variant.”




Screening for Fusobacterium in a population, particularly in habitual tobacco chewers, could be a worthy exercise

Since the beginning of the 20th Century, it has been known that infections can play a role in cancer, with 18-20% of cancers associated with infectious agents. This proportion could be relatively higher in developing countries like India. Our team at ACTREC-Tata Memorial Centre developed a highly sensitive and specific automated computational tool, HPVDetector, to quantify the presence of human papillomavirus (HPV). This was done by subtracting human sequences from the cancer genome and comparing the rest with the HPV genome to identify the presence of HPV sequence traces and determine the range of all co-infecting HPV strains in the same individual.
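
The general idea such tools build on (set aside sequencing reads that match the human reference, then compare what remains against viral references) can be illustrated with a toy k-mer matcher in Python. The sketch below is only an illustration of that idea under simplifying assumptions, not HPVDetector's actual implementation; real pipelines use proper read aligners, and the function names and the 21-base k-mer size here are placeholders.

# Toy sketch of the "subtract host, then match pathogen" idea described above.
# Not HPVDetector's real pipeline; purely illustrative.

def kmers(seq, k=21):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def classify_reads(reads, human_ref, hpv_refs, k=21):
    """Count reads that look viral (per HPV strain) rather than human."""
    human_index = kmers(human_ref, k)
    strain_indexes = {name: kmers(ref, k) for name, ref in hpv_refs.items()}
    counts = {name: 0 for name in hpv_refs}
    for read in reads:
        read_kmers = kmers(read, k)
        if read_kmers & human_index:      # "subtract" reads matching the host
            continue
        for name, index in strain_indexes.items():
            if read_kmers & index:        # remaining reads checked per strain
                counts[name] += 1
    return counts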

The analysis revealed significant occurrence of HPV 16, 18, and 31, among others, in cervical cancer. But a surprising finding was that Indian patients with oral tumours showing a distinct tobacco usage gene signature were devoid of HPV infection. This was in sharp contrast to the oral tumours among Caucasian patients, wherein tobacco genetic signature is not common but is marked by a significant presence of HPV. Several groups have corroborated this finding, and it is well established that oral tumours among Indian patients are not driven by HPV infection.

In the study published on March 4 in NAR Cancer, Sanket Desai, the lead researcher from the group, developed another advanced automated computational tool — Infectious Pathogen Detector (IPD). Beyond HPV, IPD can detect the presence of 1,058 pathogens in the human cancer genome from datasets generated from any Next Generation Sequencing platform. This tool is publicly available for download from the ACTREC-TMC website. Using IPD, the DNA and RNA sequences from 1,407 cancer samples of oral, breast, cervical, gall bladder, lung and colorectal tumours derived from Indians were analysed and compared with those of Caucasian patients.

Map of microbes

This has led to the most detailed map yet of the abundance of 1,058 microbes across Indian cancer patients. Rigorous statistical measures were adopted to distinguish the commensal microbes present as normal flora in a healthy individual from those present in the diseased state. Systematic analysis of the data helped the group identify the presence of a bacterium, Fusobacterium nucleatum, in the oral tumours at a significantly higher burden than in the oral cavity of healthy individuals.

Interestingly, Fusobacterium nucleatum is known to play a vital role in colorectal cancer, wherein its presence affects the spread of the disease and the patient's response to chemotherapy. However, a similar role for Fusobacterium in oral cancer was not known earlier. The bacterium was found in both Indian and Caucasian oral cancer patients, with a much higher incidence among the Indian patients. Moreover, oral cancer patients positive for Fusobacterium were found to be negative for HPV infection, suggesting that the two occur in a mutually exclusive manner.

The finding underlines that while oral tumours in the West are more likely to be driven by HPV infection with a lower abundance of Fusobacterium infection, the oral cancer incidences in India are caused more by Fusobacterium infection. The tumours in oral cancer patients infected with the bacterium were found to spread to lymph nodes in the head and neck region or other distant organs. This sub-class of the tumour was also found to have higher levels of genes responsible for inflammation and pro-cancer immunological response.

Consistent with this finding, infections with viruses or bacteria that cause chronic inflammation leading to cancer are known across multiple cancer types, such as HPV in cervical cancer, HBV and HCV in liver cancer, and H. pylori in gastric cancer. This study also identified three novel small non-coding miRNA molecules among tumours infected with the bacterium. The discovery of these miRNAs allows the investigators to understand, and characterise in detail, the biological pathway targeted by Fusobacterium when it infects oral cells. The study continues in collaboration with IIT Bombay, where the researchers grow oral cancer cells in the presence and absence of the bacterium.

Preventing cancer through immunisation against infectious agents, such as HPV vaccination, has been known to be effective in up to 90% of HPV-related cancers. Similarly, a significant reduction in the incidence of gastric cancer was observed across multiple studies when patients infected with the bacterium Helicobacter pylori were treated with antibiotics specific to it. The findings from the study carried out at ACTREC-Tata Memorial Centre open an opportunity to treat oral cancer patients positive for Fusobacterium, found predominantly among Indian patients, with a Fusobacterium-specific antibiotic for selectively targeting the tumours. The study emphasises the impact of Fusobacterium infection on modulating conventional chemotherapy treatment or on recurrence of the disease, as frequently observed in oral cancer patients, similar to its role in colorectal cancer. The utility of community screening for the presence of Fusobacterium in the oral cavity in a population, or among habitual tobacco chewers, remains to be explored — though it could be a worthy exercise considering the alarming increase in tobacco-associated oral cancer in India.

(Amit Dutt heads the Integrated Cancer Genomics Laboratory at the Advanced Centre for Treatment, Research and Education in Cancer (ACTREC), Tata Memorial Centre, Navi Mumbai)




The science on heavy drinking and the brain is clear. The two don’t have a healthy relationship. People who drink heavily have alterations in brain structure and size that are associated with cognitive impairments.

But according to a new study, alcohol consumption even at levels most would consider modest — a few beers or glasses of wine a week — may also carry risks to the brain. An analysis of data from more than 36,000 adults, led by a team from the University of Pennsylvania, found that light-to-moderate alcohol consumption was associated with reductions in overall brain volume.

The link grew stronger the greater the level of alcohol consumption, the researchers showed. As an example, in 50-year-olds, as average drinking among individuals increases from one alcohol unit (about half a beer) a day to two units (a pint of beer or a glass of wine), there are associated changes in the brain equivalent to ageing two years. Going from two to three alcohol units at the same age was like ageing three and a half years. The team reported their findings in the journal Nature Communications.

Going from zero to one alcohol unit a day didn’t make much of a difference in brain volume, but going from one to two or two to three units a day was associated with reductions in both grey and white matter, according to a University of Pennsylvania release.

Ample research has examined the link between drinking and brain health, with ambiguous results. While strong evidence exists that heavy drinking causes changes in brain structure, including strong reductions in grey and white matter across the brain, other studies have suggested that moderate levels of alcohol consumption may not have an impact, or even that light drinking could benefit the brain in older adults.




Prof. Dhar is the first Indian to receive this top honour in the field of statistical physics

Deepak Dhar, a physicist at the Indian Institute of Science Education and Research, Pune, has been selected for the Boltzmann medal, awarded by the Commission on Statistical Physics (C3) of the International Union of Pure and Applied Physics. He becomes the first Indian to win this award, which was initiated in 1975, with Nobel laureate (1982) K.G. Wilson being the first recipient. He shares the honour with American scientist John J. Hopfield, who is known for his invention of an associative neural network, now named after him. The award consists of the gilded Boltzmann medal bearing the inscription of Ludwig Boltzmann, and the two chosen scientists will be presented the medals at the StatPhys28 conference to be held in Tokyo on 7-11 August, 2023.

Prof. Dhar, who was formerly at the Tata Institute of Fundamental Research, Mumbai, has been chosen for this award for his seminal contributions in the field of statistical physics, including exact solutions of self-organised criticality models, interfacial growth, universal long-time relaxation in disordered magnetic systems, exact solutions in percolation and cluster counting problems and definition of spectral dimension of fractals, according to the website of the C3 Commission.

It is noteworthy that the last item, namely, the definition of the spectral dimension of fractals, relates to the work he did as a PhD student at California Institute of Technology, in the U.S. The award, in effect, marks out his lifetime’s achievement.

A magician

Prof. Gautam Menon, who has worked with Prof. Dhar, says on Twitter, “He was my post-doctoral mentor at TIFR. I can think of no-one who deserves this more.” He goes on to say that Mark Kac, who pioneered the development of mathematical probability and its application to statistical physics, once said, “There are two kinds of geniuses, the ‘ordinary’ and the ‘magicians’. An ordinary genius is the fellow that you and I would be just as good as, if we were only many times better… once we understand what they have done, we feel that we too could have done it. It is different with the magicians. Even after we understand what they have done, the process by which they have done it is completely mysterious.” Prof. Menon then says, “Deepak is a magician.”

The statistical physics community recently celebrated the occasion of Prof. Dhar and Prof. Mustansir Barma, who is now at the TIFR Hyderabad centre, turning 70. The members of this group number more than a hundred today, and the contribution of Prof. Dhar and Prof. Barma in building up this community by nurturing talent through schools and mentorship has been appreciated and noted by physicists. However, both scientists independently said that they had not done this consciously and were happy at the outcome.

Previous winners

The medal, which honours outstanding achievements in the field of statistical physics, has been given to one or two persons once in three years over the last 47 years. Previous winners include K.G. Wilson, R. Kubo, M.E. Fisher, R.J. Baxter, Kurt Binder, Giorgio Parisi, L.P. Kadanoff and other such names that are to be found in the textbooks of statistical physics, many of whom have later won the Nobel prize. It is given to a person only once, and on the condition that the person has not won the Nobel prize so far.




Cape Canaveral: The moon is about to get walloped by 3 tons of space junk, a punch that will carve out a crater that could fit several semitractor-trailers.

The leftover rocket will smash into the far side of the moon at 5,800 mph (9,300 kph) on March 4, away from telescopes’ prying eyes.

It may take weeks, even months, to confirm the impact through satellite images.

It’s been tumbling haphazardly through space, experts believe, since China launched it nearly a decade ago. But Chinese officials are dubious it’s theirs.

No matter whose it is, scientists expect the object to carve out a hole 33 feet to 66 feet (10 to 20 metres) across and send moon dust flying hundreds of miles (kilometres) across the barren, pockmarked surface.

Low-orbiting space junk is relatively easy to track.

Objects launching deeper into space are unlikely to hit anything and these far-flung pieces are usually soon forgotten, except by a handful of observers who enjoy playing celestial detective on the side.

SpaceX originally took the rap for the upcoming lunar litter after asteroid tracker Bill Gray identified the collision course in January. He corrected himself a month later, saying the “mystery” object was not a SpaceX Falcon rocket upper stage from the 2015 launch of a deep space climate observatory for NASA.

Mr. Gray said it was likely the third stage of a Chinese rocket that sent a test sample capsule to the moon and back in 2014. But Chinese ministry officials said the upper stage had reentered Earth’s atmosphere and burned up.

But there were two Chinese missions with similar designations — the test flight and 2020’s lunar sample return mission — and U.S. observers believe the two are getting mixed up.

The U.S. Space Command, which tracks lower space junk, confirmed on Tuesday that the Chinese upper stage from the 2014 lunar mission never deorbited, as previously indicated in its database. But it could not confirm the country of origin for the object about to strike the moon.

“We focus on objects closer to the Earth,” a spokesperson said in a statement.

Mr. Gray, a mathematician and physicist, said he’s confident now that it’s China’s rocket.

“I’ve become a little bit more cautious of such matters,” he said. “But I really just don’t see any way it could be anything else.”

Jonathan McDowell of the Harvard and Smithsonian Center for Astrophysics supports Mr. Gray’s revised assessment, but notes, “The effect will be the same. It’ll leave yet another small crater on the moon.”

The moon already bears countless craters, ranging up to 1,600 miles (2,500 kilometres). With little to no real atmosphere, the moon is defenceless against the constant barrage of meteors and asteroids, and the occasional incoming spacecraft, including a few intentionally crashed for science’s sake. With no weather, there’s no erosion and so impact craters last forever.

China has a lunar lander on the moon’s far side, but it will be too far away to detect Friday’s impact just north of the equator. NASA’s Lunar Reconnaissance Orbiter will also be out of range. It’s unlikely India’s moon-orbiting Chandrayaan-2 will be passing by then, either.

“I had been hoping for something (significant) to hit the moon for a long time. Ideally, it would have hit on the near side of the moon at some point where we could actually see it,” Mr. Gray said.

After initially pinning the upcoming strike on Elon Musk’s SpaceX, Mr. Gray took another look after an engineer at NASA’s Jet Propulsion Laboratory questioned his claim.

Now, he’s “pretty thoroughly persuaded” it’s a Chinese rocket part, based not only on orbital tracking back to its 2014 liftoff, but also data received from its short-lived ham radio experiment.

JPL’s Center for Near Earth Object Studies endorses Mr. Gray’s reassessment. A University of Arizona team also recently identified the Chinese Long March rocket segment from the light reflected off its paint, during telescope observations of the careening cylinder.

It’s about 40 feet (12 metres) long and 10 feet (3 metres) in diameter, and doing a somersault every two to three minutes. Mr. Gray said SpaceX never contacted him to challenge his original claim. Neither have the Chinese.

“It’s not a SpaceX problem, nor is it a China problem. Nobody is particularly careful about what they do with junk at this sort of orbit,” Mr. Gray said.

Tracking deep space mission leftovers like this is hard, according to Mr. McDowell. The moon’s gravity can alter an object’s path during flybys, creating uncertainty. And there’s no readily available database, Mr. McDowell noted, aside from the ones “cobbled together” by himself, Mr. Gray and a couple others.

“We are now in an era where many countries and private companies are putting stuff in deep space, so it’s time to start to keep track of it,” Mr. McDowell said. “Right now there’s no one, just a few fans in their spare time.”




There is absolutely no way to forecast the timing, says Professor behind SUTRA model

Independent experts have criticised a recent modelling study from a group of researchers at the Indian Institute of Technology, Kanpur that predicts a fourth COVID wave in India around June.

The study, uploaded on the preprint server medRxiv, which hosts scientific work that is yet to be published in a peer-reviewed journal, forecasts the wave to begin precisely on June 22, 2022, reach its peak on August 23, 2022 and end on October 24, 2022. For its analysis, it takes the trajectory of the coronavirus epidemic in Zimbabwe, because its history most resembles the case trend in India, and concludes that because Zimbabwe has seen a fourth wave, a fourth wave in India is a fait accompli.

The authors, Sabara Parshad Rajeshbhai, Subhra Sankar Dhar and Shalabh of the Department of Mathematics and Statistics, IIT-K, add the caveat that the future wave could be affected by the nature of any new variant that emerges as well as by vaccination coverage.

The Hindu could not immediately reach out to the authors for comment.

“The findings from this study are aimed to help and sensitise the people. For example, a few countries including the Government of India have started to provide booster dose to a section of people, which may reduce the impact of the fourth wave in a long run,” their paper notes.

A group at IIT Kanpur, led by Manindra Agrawal, a professor of mathematics and computer science, is behind the SUTRA model, whose forecasts on the pandemic are widely followed. While this model failed to forecast the deadly second wave and was critiqued by epidemiologists and biologists for its approach, it was accurate at gauging the trajectory of the third wave. The latest study is however independent of the SUTRA model.

Mr. Agrawal told The Hindu that he disagreed with the underlying assumptions of the latest study. The timing of a hypothetical fourth wave, he said, could not be predicted because it was heavily dependent on the nature of a future variant. “There is absolutely no way to predict the timing. If at all we see one, it will be very short and would have to be caused by a highly infectious variant because you have to account for the fact that nearly 90% of India has been exposed to the virus,” he said. The SUTRA model does not yet see a fourth wave, he added.

Gautam Menon of Ashoka University, who has been closely involved with efforts to mathematically model the pandemic, argued in an explanatory Twitter thread that the forecast of a fourth wave “shouldn’t be taken seriously” because epidemiology wasn’t an exact science like physics or chemistry. The pandemic waves were being driven by variants, none of which could be predicted in advance, and modelling could at best be useful for “broad policy rather than a highly specific prediction of numbers.”

Zimbabwe’s median age was 19 as opposed to India’s 30, and it had a vaccination coverage of 40% of the population to India’s 75%. These meant different degrees of “hybrid immunity” (that is, protection from future infections due to a combination of vaccines and previous exposure), and the current study did not account for this, rendering its predictions unusable, he opined. “Should one trust this model at all? The answer, simply, is no.”




Forgotten for centuries, Balkanatolia is a speculative ‘third’ Eurasian continent, wedged between Europe, Africa and Asia. The continent likely came into existence 50 million years ago and lost its independent identity because of a major glaciation event 34 million years ago that led to the formation of the Antarctic ice sheet and lower sea levels, connecting Balkanatolia to Western Europe.

The case for why such a continent ought to have existed comes from centuries of palaeontological evidence that posed a puzzle. During the Eocene Epoch (55 to 34 million years ago), Western Europe and Eastern Asia formed two distinct land masses with very different mammalian faunas: European forests were home to endemic fauna such as Palaeotheres (an extinct group distantly related to present-day horses, but more like today’s tapirs), whereas Asia had a more diverse fauna including the mammal families found today on both continents.

Western Europe was known to be colonised by Asian species around 34 million years ago, leading to a major renewal of vertebrate fauna and the extinction of its endemic mammals, a sudden event called the ‘Grande Coupure’. Surprisingly, fossils found in the Balkans point to the presence of Asian mammals in southern Europe long before the Grande Coupure. This was best explained by the existence of a landmass, prior to the Eocene, that was disconnected and a continent of its own.





A recent study constructs a genealogy of all humans from 27 million fragments of ancestral genomes

Andrew Wilder Wohns et al, “A unified genealogy of modern and ancient genomes”, Science, Vol. 375, 2022. DOI: 10.1126/science.abi8264

Building family trees is very popular with many people. But such an attempt is likely to take the process back just a few generations at the most. In some cases, families have been traced back hundreds of years. Now science has enabled us to take this process back thousands of years, and we can have a family tree of the entire human race, according to a study published in the journal Science. These family trees are constructed not by identifying the individuals one has descended from, but by taking, from human genomic databases, details of each chromosome (autosome) and constructing a “tree” that relates it to the parent from whom that particular chromosome was inherited. The new work, by researchers from the University of Oxford’s Big Data Institute, builds up the family tree of the entire human race using this methodology. This is the largest family tree ever, built with about 27 million ancestral haplotype fragments (ancestral DNA). The research was published in the February 25 issue of Science this year.

Inferring historical data

Among existing methods to build up a genealogy of humans, or of any other organism whose genome data exists for that matter, the one involving “trees” appears to be more accurate. We already have such trees that have combined thousands of genomes and traced out demographic data. According to this study, ancient genomes can be integrated into these trees to give data from which one can infer historical details of demography. The present study, the latest in this line, takes into account more than 3,500 high-quality and ancient genomes from more than 215 different human populations. In addition, the researchers used more than 3,000 ancient genomes to improve inferences from the trees. A “Perspective” article in the same issue of Science (written by Jasmin Rees and Aida Andres) gives an easy-to-read gist of the paper.

Combining datasets that have been built up independently poses a huge challenge. The authors refer to discrepancies between cohorts due to errors, differing sequencing techniques, and the processing of variants, which can lead to a lot of “noise” that can drown out the “signal.” However, they build up this tree, the largest so far, in a clever way and reproduce several well-known demographic results that earlier methods, such as mitochondrial DNA and Y chromosome analyses, had given. They reiterate the out-of-Africa emergence result, for one thing.

The paper proposes a new method to determine the ancient geographical location of evolutionary events. The researchers postulate that the location of an ancestor is at the mid-point of the geographical locations of its descendants in the tree. By repeating this procedure up the tree, they arrive at the location of the theoretical earliest common ancestor. Wohns et al write that although the geographical centre of the sampled individuals is in Central Asia, when you go back 72,000 years into the past, this geographical centre shifts to Northeast Africa “and remains there until the oldest common ancestors are reached.” They further state, “the geographic centre of the 100 oldest ancestral haplotypes (which have an average age of approximately 2 million years) is located in Sudan.” While these results are supported by the oldest fossil findings, the authors caution that if the data had consisted of a grid sampling of genomes from Africa, for instance, the centre of gravity, which is now Sudan, could shift.
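
As a rough illustration of this mid-point idea, the short Python sketch below places each ancestor at the plain average of its children's coordinates and propagates that up a small tree. It is only a caricature of the study's method, which works on genome-wide trees with locations on a sphere; the node names and coordinates here are invented for the example.

# Toy illustration of the ancestor-location idea: an ancestor sits at the
# mid-point (here, a plain average) of its descendants' locations.

def infer_ancestor_locations(children, sample_locations):
    """children: node -> list of child nodes; leaves have known locations."""
    locations = dict(sample_locations)

    def locate(node):
        if node in locations:                       # sampled individual
            return locations[node]
        child_locs = [locate(c) for c in children[node]]
        lat = sum(p[0] for p in child_locs) / len(child_locs)
        lon = sum(p[1] for p in child_locs) / len(child_locs)
        locations[node] = (lat, lon)                # ancestor at the mid-point
        return locations[node]

    for node in children:
        locate(node)
    return locations

# Two sampled genomes and their inferred common ancestor (made-up data).
tree = {"ancestor": ["sample_A", "sample_B"]}
samples = {"sample_A": (30.0, 31.0), "sample_B": (-1.3, 36.8)}
print(infer_ancestor_locations(tree, samples)["ancestor"])   # roughly (14.35, 33.9)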

Limitations of the study

The “Perspective” article also outlines some of the limitations of the tree method, namely, that there is uncertainty in evolutionary parameters and there are errors in reading ancient genomes, but it also points out that with more and better-quality data coming in, the tree method is likely to provide good answers; as larger datasets from under-represented populations become available, the answers will become more accurate.

While this particular study has focussed on human genealogy, the authors point out that this can be used to trace out the ancestry of any species for which genomic data exists.

It is important to appreciate that such an analysis would not have been possible in the twentieth century, when the human genome had not been sequenced, cutting edge techniques for isolating, identifying and sequencing genomes of archaic sources had not been developed, sophisticated computer simulation methods were not available and such perspectives had not been established. So truly, the building of this huge genealogy is a feat of twenty-first century science, and it brings together evolutionary biologists, repositories of information, simulation experts and high-power computing and data science to yield hitherto unimagined results.

THE GIST
A new study by researchers from Oxford has enabled us to see a family tree of the entire human race. These family trees are constructed by taking from the human genomic databases, details of each chromosome (autosome) and constructing a “tree” that relates it to the parent from whom that particular chromosome was inherited.
According to this study, ancient genomes can be integrated into these trees to give data from which one can infer historical details of demography. It takes into account more than 3,500 high-quality and ancient genomes from more than 215 different human populations.
Without cutting edge techniques for isolating, identifying and sequencing genomes of archaic sources and sophisticated computer simulation methods of 21st century science, such a feat would have been impossible.



The war in Ukraine threatens space development — and not only due to sanctions on Russia. Ukraine plays a little-known, but crucial, role in the world's space scene.

Written by Esteban Pardo

Many space programs and rockets likely wouldn’t exist without Ukraine’s space industry.

Ukraine has been a major player in the world’s space industry since the 1950s. Today it is a top designer and manufacturer of space launch vehicles, rocket engines, spacecraft and electronic components.

One of Ukraine’s leading space manufacturers is the state-owned company Yuzhmash, which works closely with Yuzhnoye, a Ukraine-based designer of satellites and rockets. Both companies were founded in the 1950s and answer to the State Space Agency of Ukraine (SSAU).

Key role in the world’s space scene

The European Space Agency’s (ESA) successful rocket family Vega, which recently celebrated its 10th anniversary, has a Ukrainian-made rocket engine in its upper stage — the part that detaches from the rocket and then places the payload into the desired orbit.

The Vega launch vehicle is used to launch small payloads, and a newer version, the Vega-C, is currently under development and expected to debut later this year.

Another important rocket family designed by Yuzhnoye is the Zenit, which aimed to replace the outdated, Soviet-era Tsyklon and Soyuz rocket families. After 71 successful launches, the last flight of the Zenit rocket family took off in December 2017.

Since its first flight in the 1960s, the Soyuz family has been the most-used launch vehicle in the world. After the Space Shuttle was retired in 2011 and until SpaceX’s first crewed Falcon 9 flight in 2020, Soyuz rockets were the only approved launch vehicle for sending astronauts to the International Space Station (ISS).

The ISS constantly needs new deliveries of supplies. For that, they use different spacecraft like SpaceX’s Dragon, the Russian Progress or the Cygnus, which is carried by an Antares launch vehicle jointly developed by the US company Northrop Grumman and Ukraine’s Yuzhnoye.

Additionally, parts of the rocket engine technology currently being developed by Rocket Factory Augsburg, a German start-up trying to build the cheapest rocket in the world, come from Ukraine’s Yuzhmash, according to Golem.

Ukraine’s ‘Rocket City’

Yuzhnoye and Yuzhmash are both headquartered in the southeastern Ukrainian city of Dnipro, dubbed “Rocket City” after its space industry.

So far, no attacks on the city have been officially reported, but Reuters reported an eyewitness video of an alleged explosion near Dnipro on Feb. 24.

Two days later, Euronews reported that masses of men and women from Dnipro were volunteering to join the fight. This was further confirmed by Al Jazeera Witnesses, which reported “people collecting food, water, clothing and even making Molotov cocktails to throw at tanks.”

During the Soviet era, Dnipro was one of the main centers for space, nuclear and military industries and played a crucial role in the development and manufacture of ballistic missiles for the USSR.

One of the most powerful intercontinental ballistic missiles (ICBM) used during the Cold War was the R-36, which later became the base of the Tsyklon launch vehicle families. Both the R-36 and the Tsyklon were designed by Yuzhnoye and manufactured by Yuzhmash.

Dnipro’s famous aerospace industry has also attracted foreign companies like Texas-based Firefly Aerospace. The company was purchased in 2017 by Max Polyakov, who opened a Firefly Aerospace research and development center in Dnipro the following year.

Ukraine’s space program also involves projects such as space debris removal missions and anti-asteroid protection systems.

It has successfully launched many satellites for communication, imaging and scientific purposes into orbit and is in the process of developing a new space launch vehicle, the Cyclone-4M, based on the Zenit and the Tsyklon.

Edited by: Clare Roth




Similar structures discovered at Pahrump Hills on Mars were found to be made of sulfates.

NASA’s Curiosity Rover has created a picture of a mineral formation that is shaped like a flower. The formation resembles a coral or sea anemone in the picture, but it is just a lifeless structure. According to Space.com, the flower-like rock has been named Blackthorn Salt and it is a diagenetic feature, which means that it is made from mineral deposits left behind by an ancient water body.

Images of the structure were captured near Aeolis Mons (Mount Sharp), a Martian mountain that forms the central peak within the Gale crater, and were merged on February 25. The Curiosity Rover was designed to explore this crater and has been doing so since it landed on Mars in August 2012.

The image was created by merging between two and eight images previously taken by the Mars Hand Lens Imager (MAHLI), which is located on a turret at the end of the rover’s robotic arm. Focus merging was used to combine the multiple images in such a way as to ensure that as many of the features as possible are in focus.
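
Focus merging (often called focus stacking) boils down to keeping, for every pixel, the value from whichever frame is locally sharpest. The Python sketch below, which assumes OpenCV and NumPy and already-aligned frames of equal size, shows that general idea; it is not MAHLI's actual processing pipeline, and the file names in the usage comment are placeholders.

# A minimal focus-stacking sketch: pick, per pixel, the sharpest source frame.
import cv2
import numpy as np

def focus_merge(images):
    """images: list of aligned BGR frames of identical size."""
    sharpness = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        lap = cv2.Laplacian(gray, cv2.CV_64F)            # edge response tracks focus
        sharpness.append(cv2.GaussianBlur(np.abs(lap), (9, 9), 0))
    best = np.argmax(np.stack(sharpness), axis=0)        # sharpest frame per pixel
    stack = np.stack(images)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]                       # merged composite

# merged = focus_merge([cv2.imread(p) for p in ["frame1.png", "frame2.png"]])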

Abigail Fraeman, a Deputy Scientist for the Curiosity Mars Rover project, tweeted the same picture with a United States penny juxtaposed on it, approximately to scale, to help people understand the actual size of the structure. The Lincoln penny photoshopped onto the image by the scientist is actually from an image of a penny that is part of a camera calibration target on the Curiosity Rover.

(1/3) Your Friday moment of zen: A beautiful new microscopic image from @MarsCuriosity shows teeny, tiny delicate structures that formed by mineral precipitating from water.

(Penny approximately for scale added me)https://t.co/cs7t11BWAj pic.twitter.com/AU20LjY5pQ

— Abigail Fraeman (@abbyfrae) February 26, 2022

According to Fraeman, similar structures have been discovered on Mars in the past, most notably at Pahrump Hills, an outcrop at the base of Aeolis Mons. There, the features were made of salts called sulfates.




The new model suppresses topic-specific biases by being based entirely on emotional states while learning nothing about the topic described in posts.

Researchers at Dartmouth College have developed an artificial intelligence (AI) model that can be used to predict mental disorders using data from conversations on Reddit, according to an article by the university.

Researchers Xiaobo Guo, Yaojia Sun and Soroush Vosoughi presented a paper titled, “Emotion-based Modeling of Mental Disorders on Social Media” at the 20th International Conference on Web Intelligence and Intelligent Agent Technology.

According to the paper, most such AI models that exist currently function on the basis of the psycho-linguistic analysis of the content of the user-generated text. Despite displaying high levels of performance, content-based representation models are affected by domain and topic bias.

Vosoughi explained to a Dartmouth science writer that if a model learns to correlate the word “COVID” with “sadness” or “anxiety”, it will automatically assume that a scientist doing COVID research and posting about it is suffering from depression and anxiety.

The new model suppresses these topic-specific biases by being based entirely on emotional states while learning nothing about the topic described in posts.

To train the model, researchers collected two sets of data from between 2011 and 2019: the first one was a dataset of users with one of three emotion disorders of interest (major depressive, anxiety and bipolar disorders) and the second was a dataset of users without known mental disorders, which acted as a control group.

The first dataset was collected based on self-reported mental disorders, i.e., the researchers searched for users who had made posts or comments saying something similar to “I was diagnosed with bipolar/depression/anxiety”. Only posts made before the self-report were considered for the research, because prior work had shown that users’ realisation that they have a disorder changes how they behave online and creates a bias.
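
The article does not give the exact matching rules the researchers used, so the small Python sketch below is only an assumption about what such a self-report filter could look like; the pattern and wording are illustrative.

# Illustrative self-report filter; the actual patterns used in the paper
# are not described in the article.
import re

SELF_REPORT = re.compile(
    r"\bI (was|am|have been|got) (recently )?diagnosed with "
    r"(major )?(depression|depressive disorder|anxiety|bipolar( disorder)?)\b",
    re.IGNORECASE,
)

def is_self_report(post_text):
    return SELF_REPORT.search(post_text) is not None

print(is_self_report("I was diagnosed with bipolar disorder last year"))  # True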

Researchers then ensured that the data belonging to the four classes (one each for users with each disorder of interest and one control group) had similar temporal distributions: this means that the data in the four classes had a similar time-based distribution of posts. The datasets were also balanced with 1,997 users for each of the classes.

After this, the researchers split the data into training (70%), validation (15%) and test (15%) sets. After training the model on the data and then testing it, the researchers found that the emotion-based representation model they used was more accurate in predicting disorders than the content-based TF-IDF (Term Frequency-Inverse Document Frequency) method. TF-IDF is used to compute the importance of a keyword in a post, based on how often it appears in that post weighted by how rare it is across all posts.
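
As a quick illustration of the TF-IDF baseline mentioned above, the sketch below uses scikit-learn to weight the terms of a few invented posts; it is not the authors' actual pipeline, and the example texts are made up.

# TF-IDF weights combine a term's frequency in a post with its rarity
# across all posts; scikit-learn provides this out of the box.
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "I feel anxious about everything lately",
    "Great run this morning, feeling fine",
    "Lately everything makes me anxious and tired",
]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(posts)                  # rows = posts, columns = terms
weights = dict(zip(vectorizer.get_feature_names_out(), X.toarray()[0]))
print(sorted(weights.items(), key=lambda kv: -kv[1])[:3])   # top terms of post 0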




The researchers demonstrated the effectiveness of side-channel attacks on commercial microprocessors in an earlier paper before developing the new chip.

Saurav Maji and Utsav Banerjee, two Indian researchers working at the Massachusetts Institute of Technology (MIT), have built a low-energy security chip that is designed to prevent side-channel attacks (SCAs) on IoT (Internet of Things) devices. SCAs take advantage of security exploits where information can be gathered from the indirect effects of the functioning of the system hardware rather than attacking a programme or software directly.

“Traditionally, SCAs have been used in cryptography. If some data is being processed and there is a secret key used to encrypt or decrypt it, SCAs could be used in some cases to recover this key. It can be applied to any data that you want to keep secret. For example, it can be used on your smartwatch to extract your ECG and heart rate signal,” Maji, a graduate student at MIT and lead author of the paper, told indianexpress.com.

Side-channel attacks and their increasing viability

Typically, these attacks aim to extract sensitive information like cryptographic keys, proprietary machine learning models and parameters by measuring things like timing information, power consumption and electromagnetic leaks of a system.

In order to illustrate, let’s imagine that you want to find out whether your neighbour has been watering their garden. Using traditional attack methods, you would try to keep track of your neighbour to see if and when they are watering the plants in their garden.

But if you were to use the logic of an SCA, you would determine the same by measuring other auxiliary information like whether their plants are doing well, the amount of water they consume in the household, and whether they have the garden hose out. Here, you are using the information from the execution of an act to determine what is happening rather than looking at the act itself.

Even though SCAs are difficult to execute on most modern systems, the increasing sophistication of machine learning algorithms, greater computing power of devices and measuring devices with increasing sensitivities are making SCAs more of a reality.

(Image caption: Saurav Maji, left, is the lead author of the paper and Utsav Banerjee, right, is a co-author. Maji is a graduate student pursuing a PhD at MIT, while Banerjee, an MIT graduate, is currently an assistant professor at IISc. Image credit: ResearchGate, GitHub)

Before developing the new security chip, Maji and Banerjee had published an attack paper titled “Leaky Nets: Recovering Embedded Neural Network Models and Inputs through Simple Power and Timing Side-Channels — Attacks and Defenses” in the IEEE Internet of Things Journal, under the guidance of Anantha Chandrakasan, the dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science.

In the paper, they demonstrated the efficacy of SCAs by recovering machine learning model parameters and even inputs from the functioning of a commercial embedded microprocessor, similar to the ones used in commercial IoT devices.

How the new architecture could help

Since SCAs are difficult to detect and defend against, countermeasures against them have notoriously been very computing power and energy-intensive. This is where the new chip architecture comes in.

The MIT researchers presented their design in a paper titled “A Threshold-Implementation-Based Neural-Network Accelerator Securing Model Parameters and Inputs Against Power SCAs”, published in the International Solid State Circuits Conference 2022.

While Chandrakasan is the senior author of the paper, others who worked on it include Banerjee, an MIT graduate and now assistant professor at the Indian Institute of Science, and Sam Fuller, a visiting research scientist at MIT.

The chip built by Maji and his collaborators is smaller than the size of a thumbnail and uses much less power than traditional security measures against SCAs. It has been built to be easily incorporated into smartwatches, tablets, and a variety of other devices.

“It can be used in any sensor nodes which connects user data. For example, it can be used in monitoring sensors in the oil and gas industry, it can be used in self-driving cars, in fingerprint matching devices and many other applications,” said Maji.

The chip uses a threshold implementation, a masking method in which the data to be worked on is first split into separate, unique and random components. The chip then conducts operations separately on each component in a random order before aggregating the results for a final result.

Due to this method, the information that leaks from the device through power-consumption measurements is random and would reveal nothing but gibberish to an SCA. However, this method is energy- and computation-intensive, while also requiring more system memory to store information.
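
The splitting-into-random-shares idea can be conveyed in software with simple additive masking: a value is split into random shares that sum to it, each share is processed on its own, and only the recombined result is meaningful. The Python sketch below is a conceptual illustration under that assumption; the chip's threshold implementation is a hardware technique and differs considerably in detail.

# Conceptual additive-masking sketch: no single share reveals the inputs.
import secrets

MOD = 2 ** 16

def split_into_shares(value, n_shares=3):
    shares = [secrets.randbelow(MOD) for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) % MOD)       # shares sum to the secret
    return shares

def add_shared(a_shares, b_shares):
    return [(a + b) % MOD for a, b in zip(a_shares, b_shares)]  # per-share work

def recombine(shares):
    return sum(shares) % MOD

a, b = 1234, 5678
result = recombine(add_shared(split_into_shares(a), split_into_shares(b)))
print(result == (a + b) % MOD)                       # True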

Maji and others found a way to optimise this process to reduce some of the computational overheads. The researchers claim they have reduced the required computing overheads by three orders of magnitude with their chip architecture.

But at the same time, the implementation of this chip architecture in a system would require at least a five-fold increase in energy consumption and 1.6 times the silicon area of an insecure implementation. Also, the architecture only protects against energy consumption-based SCAs and doesn’t defend against electromagnetic SCAs.




The new study reports findings from investigating the behaviour of nearly 45 chimpanzees.

Chimpanzees have recently been observed applying insects to their wounds and to those of their conspecifics (i.e. individuals of the same community and species), apparently for healing. The new study, conducted by researchers from Osnabruck University and the Ozouga Chimpanzee Project in the Loango National Park, Gabon, and published in Current Biology, reports findings from investigating 76 wounds in 22 different chimpanzees. The Rekambo community consists of ≈ 45 chimpanzees and was observed for its social behaviour, hunting behaviour, tool use and communication skills.

“Our two closest living relatives, chimpanzees and bonobos, for instance, swallow leaves of plants with anthelmintic properties and chew bitter leaves that have chemical properties to kill intestinal parasites,” Simone Pika, one of the authors of the study and co-director of the Ozouga Chimpanzee project, said in a press communication with indianexpress.com

This behaviour was first filmed – almost by chance – by a volunteer, Alessandra Mascaro, at the Ozouga Chimpanzee Project, who saw a mother chimpanzee catch something from underneath a bush, put it between her lips and apply it to the wound of her adolescent son. The team then decided to focus future research efforts specifically on wounded individuals.

In another incident, an adult male was seen wounded, and an adult female chimpanzee was observed catching an insect, which she handed over to the wounded male, who applied it onto his wound. The team recorded 22 such instances over 15 months, between Nov 2019 and Feb 2021.

Self-medication has been observed across a wide spectrum of species, Mascaro et al. (2022) highlight. Wood ants, for instance, have been known to use antimicrobial resin from conifer trees in their nests. Parasite-infected monarch butterflies, in a bid to protect their offspring, lay their eggs on antiparasitic milkweed. Primates have been known to chew on Vernonia amygdalina, which has antiparasitic properties but no nutrition as such.

So far, among the Great Apes, only the eating of plant parts or non-nutritional substances had been observed. But this is the first study that reports such a behaviour involving the skin-level (topical) application of animal matter to wounds. The insect species that the chimpanzees preferred has not been identified so far, but the study does make some observations. One, they are flying insects, as the chimpanzees moved their hands quite fast in order to catch them. Two, sometimes these insects are caught under a leaf or a branch, and they are dark in colour. Three, the researchers never observed the chimpanzees eating the insects.

This might have two implications. One, that insects might actually have some medical functions, for instance anti-inflammatory properties. After all, all chimpanzees practising this behaviour were wounded without exception. Indeed, some insect species do have antibiotic and antiviral properties but it still remains to be seen whether it is not merely a local practice in that particular chimpanzee community (just like some cultural treatments in human societies that have no direct medical function). At any rate, the study notes, it gives an interesting peek into the origins of human traditional medicine.

The other, equally important, implication, according to Mascaro et al. (2022), is that it points towards the cognitive and behavioural sophistication of the species. The study highlights that ‘individuals not only treat their own wounds but also that of their other non-related members of the species.’ This clear prosocial behaviour is rarely observed in non-human societies.

The observation of prosocial behaviour – actions intended to help others, and arising out of empathic concerns in humans – raises important questions for the study of evolution. Evolution, at least in theory, holds that any individual acts in self-interest. Chimpanzees, being our closest relatives, offer a very good template to study this behaviour in non-human primates and, ultimately, to reconstruct its development in humans.

Other studies, such as Mitani (2009), have documented cooperation, territorial patrolling and meat sharing, as well as aggression. However, still others, such as Silk et al. (2005), report a complete non-existence of cooperation and nothing short of an ‘indifference’ towards unrelated conspecifics. The findings of this study show, if anything, that the debate is far from settled; and that chimpanzees will continue to surprise us with unexpected new behaviours.

The author is a research fellow at the Indian Institute of Science (IISc), Bengaluru, and a freelance science communicator. He tweets at @critvik 




Unlike most modern innovations in passive solar desalination technologies, this one uses a wick-free system to ensure low running costs and maintenance.

A team of researchers at the Massachusetts Institute of Technology (MIT) and Shanghai Jiao Tong University in China have come up with an inexpensive passive solar evaporation system that can be used to clean wastewater or desalinate saline water in order to provide potable water. Most modern attempts at solar desalination use some kind of wick to draw salty water through the device. But these wicks face the problem of salt accumulation, which causes the system’s efficiency to drop and requires regular and periodic maintenance, making it much more expensive and much less practical.

The new research findings have been published in a paper in the journal Nature Communications by MIT graduate student Lenan Zhang, postdoctoral associate Xiangyu Li, professor of mechanical engineering Evelyn Wang, and four others.

In order to avoid the problem of salt accumulation, the team created a wick-free system. Their system features a layered design with dark material at the top to absorb the sun’s heat, followed by a thin layer of water that sits above a perforated layer of material, which itself sits above a reservoir of salty or non-potable water like a tank or a pond.

“The recent development has been using wicking structures and novel materials to achieve high performance. But because of capture pressure, you restrict mass flow. Only the freshwater is evaporating. This leaves a lot of salt in this confined porous structure. This accumulates so much salt, that the system stops being efficient. This creates a reliability issue. We utilise natural convection to avoid using such materials,” Xiangyu Li told indianexpress.com.

After a lot of experimentation, the researchers determined the optimal size of the holes drilled through the perforated material (which was polyurethane during the experiments): 2.5 mm across. During the experiment, the holes were made using a high-pressure waterjet but Li doesn’t rule out the possibility of using other methods to create them.

As the water above the perforated layer gets saltier due to evaporation, the small holes facilitate the exchange of salt between the water on top and the reservoir below. This happens due to the difference in density between the water with accumulated salt on top and the water underneath.

“When you look at a road on a hot day, you see some wave-like things in the distance. This happens due to the fact that air near the surface is hotter, creating a convection flow causing the refraction with which you perceive those ‘waves’. Our device works on a similar premise, based on the different density of the water in the two layers,” explained Li.

“Unlike other designs with reliability issues, we use natural convection, relying on the geometry of the device. It was completely constructed from household materials we sourced through Amazon. By my estimates, it will cost about $4 for one square-meter device,” added Li.

Of course, this device only takes care of one part of the process: evaporation. In order to become a fully working system, it will also need a separate condenser. But Li reckons that a condenser device can be built to be equally as cost-effective as the evaporator.

Under ideal conditions, a one square metre device like the one described by Li should be able to yield about 6.5 litres of water.
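
That figure is consistent with a rough back-of-envelope energy estimate, if one assumes it refers to a day's output. The short Python arithmetic below uses assumed inputs (about 5 kWh of solar energy per square metre per day, and an evaporation efficiency of roughly 80%, typical of good passive solar evaporators); neither number is taken from the paper itself.

# Rough consistency check with assumed inputs, not figures from the paper.
LATENT_HEAT = 2.26e6       # J per kg needed to evaporate water (standard value)
insolation = 5 * 3.6e6     # assumed daily solar energy on 1 m^2, in joules (5 kWh)
efficiency = 0.8           # assumed fraction of that energy driving evaporation

daily_yield_kg = efficiency * insolation / LATENT_HEAT
print(round(daily_yield_kg, 1))   # about 6.4 kg, i.e. roughly 6.5 litres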

The device isn’t without limitations though: the need for a reservoir like a tank or a pond means that it will be hard to deploy in areas that are truly arid. Instead, it is aimed at being a decentralised desalination and purification solution for families and communities who live in remote areas where geography and other factors make it difficult to access desalinated water from a centralised plant.

Also, the device is likely years away from a stage where it can be mass-produced or deployed as the researchers are still working on improving its operational efficiency and understanding what modifications need to be made based on various environmental and source factors like reservoir water quality, temperature etc.


