How Climate Change Affects Our Forests

You may remember from science class that plants take up water and carbon dioxide during photosynthesis to make glucose for food, releasing oxygen in the process. Since trees can grow to be very large and live for a long time, they are great at absorbing and storing large amounts of carbon dioxide from the atmosphere. In addition to acting as carbon sinks, our forests provide vital services. They provide humans with clean air and shade. Forests serve as crucial habitats for wildlife, including many endangered species. Their deep roots help hold soil together and reduce erosion. Many people find that simply walking through a forest improves their mental well-being. Our trees should not be taken for granted, especially since human-caused climate change is impacting forests in harmful ways.
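
In chemical shorthand, the overall process can be summarized by the familiar (simplified) photosynthesis equation, with carbon dioxide and water going in and glucose and oxygen coming out:

\[
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\text{light}}\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\]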

Since the beginning of large-scale greenhouse gas emissions (around 1830), Earth’s average temperature has increased by a bit more than 1 degree Celsius. This may not seem like much, but it’s significant for our forests, which are used to gradual natural changes in temperature over thousands of years. During the last Ice Age, temperate forests of the Northern Hemisphere spread farther south, reaching the Mediterranean region of Europe and Northern Africa. At the same time, regions that are deserts today, like the American Southwest, became forested. As the ice caps melted and temperatures rose, these forests retreated north to their modern locations. The recent warming caused by humans is accelerating that trend.

Largely due to climate change, trees are on the move. Individual trees, of course, are stationary. But the seeds they produce can be transported by animals, wind, and water to new areas, where they grow into new trees. If trees on one edge of a range die out while trees on another edge thrive and spread, the population is said to migrate. A study by the US Forest Service analyzed 30 US states over several decades, comparing the locations of older trees with younger growth, and found that about 70% of the tree species studied were migrating. This movement is problematic for mixed forest communities, where more than one species of tree dominates: if those species move in different directions, the forest community will lose diversity.

By 2100, the forests of the Midwest could contain more southern tree species like this forest in Tennessee. Photo by Doug Bradley.

Forests in Illinois are changing, and scientists are increasingly interested in their future. Before it was developed, the Chicago region was dominated by oak trees such as the white oak (the state tree of Illinois). The acorns of the white oak remain an important source of food for rodents, deer, and birds in the surrounding natural areas. The U.S. Forest Service report characterizes this tree as moderately vulnerable to climate change—although it is at risk of being damaged by heavy winds, it is drought-tolerant and may adapt to hotter conditions. Still, overall, it is projected to decrease in number in Illinois. Black cherry is another important tree in our region. Its fruit is edible and a favorite among birds. Black cherry flowers are important to many pollinators, including the swallowtail butterfly, whose caterpillars consume the leaves. The report characterizes it as highly vulnerable to climate change, due to the tree’s sensitivity to pollution and soil conditions, both of which are projected to worsen. Black cherry trees are projected to decrease in number, reducing biodiversity in both trees and the wildlife that depends on them.

Climate change is also threatening trees across the United States through forest fires. Due in part to the hotter, drier summers brought by climate change, the incidence of forest fires is increasing: in the last three decades, the area burned by large forest fires in the western United States has doubled. Forest fires and climate change work as a positive feedback loop. A warmer climate encourages forest fires by drying out vegetation and soil, and shorter periods of snow cover mean that fires can occur during colder months. Forest fires, in turn, contribute to climate change. A burned forest not only releases the carbon once stored in the trees, but also can no longer absorb new carbon.

Aftermath of a 2015 forest fire in Arizona. Photo by US Department of Agriculture.

Climate change is also creating opportunities for invasive species that are not native to our forests. Many invasive species that were once limited in range by their inability to survive cold winters are spreading to new habitats made vulnerable by climate change. Invasive plant species take up space and resources, crowding out native trees and using up water and soil nutrients. Invasive insect species may attack specific native tree species. One invasive insect, the emerald ash borer, was first detected in Michigan in 2002 and has killed millions of native ash trees by tunneling under their bark. It arrived in Illinois in 2006 and has inflicted extensive damage ever since. A tree census completed in 2020 found that the number of ash trees in Chicago had decreased by half since 2010. Our changing climate could allow invasive species to reach more natural habitats, diminishing our forests and native wildlife. Invasive species can also indirectly contribute to climate change by killing off large numbers of trees.

Emerald ash borers have devastated the ash tree population. Photo by Tom Murray.

While trees can live for many decades, climate change is a quickly accelerating process. Therefore, land managers must consider future climate conditions when deciding what trees to plant. One strategy is to “cast a wide net,” seeding many different types of trees with the hope that some will thrive in the future. Another is to plant species that prefer warmer climates. The Climate Change Response Framework is an organization that connects scientists with land managers to help predict how tree species will respond to climate change and to plan strategies for forest management. With the U.S. Forest Service, they developed the Climate Change Tree Atlas, which projects changes in tree ranges under moderate and severe climate change scenarios. By making use of these tools and monitoring local conditions, foresters can make better-informed decisions about what to plant, and where.

Our forests face an uncertain future. While forests have changed slowly over millennia in response to natural climate variations, human-caused climate change is altering them on a much shorter timescale, in both natural and urban settings. Scientists are uncertain how our forests will look in the future, but recent studies of tree migration have revealed key trends. Understanding how climate change will affect trees will be a top priority for conservation so that we can identify the best strategies to protect our forests.

Five Myths About Exercise

Exercise is a topic that’s muddied with mainstream misconceptions. Here are five common exercise myths, debunked by science.

Myth: Muscle soreness is caused by lactic acid.

Glycolysis is a reaction within the cell where a molecule of glucose, or sugar, is broken down into smaller molecules to release energy, ultimately producing lactic acid as a side product.
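
Leaving out the intermediate steps and cofactors, the net reaction when glycolysis ends in lactic acid can be sketched as follows, with the released energy captured in ATP, the cell’s energy currency:

\[
\mathrm{C_6H_{12}O_6}\;(\text{glucose}) + 2\,\mathrm{ADP} + 2\,\mathrm{P_i} \;\longrightarrow\; 2\,\mathrm{C_3H_6O_3}\;(\text{lactic acid}) + 2\,\mathrm{ATP} + 2\,\mathrm{H_2O}
\]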

Accumulation of lactic acid slightly lowers the pH inside the cell, which interferes with some steps of glycolysis and reduces the energy output of the process. The muscle can no longer get enough energy from glycolysis to maintain its contraction, and, as fatigue sets in, it is forced to relax. To allow the muscle to keep exercising, the lactic acid is immediately secreted from the muscles as a waste product and cleared away by the bloodstream within seconds. This means that lactic acid does not linger around the muscles long enough to make them sore after exercise.

Glycolysis in the muscle produces lactate (lactic acid), which is carried away by the blood to be recycled back into glucose in the liver. This is called the Cori cycle.

So, what causes muscle soreness? It’s a subject that’s still being investigated, but right now the leading theory is that strenuous exercise inflicts mechanical damage to the structure of muscle cells. Cellular damage results in inflammation and a sensation of pain that goes away over time as the muscle heals. The most effective way to manage muscle soreness is the same as most other mechanical injuries: rest, cold compress, and anti-inflammatory drugs.

Myth: Losing muscle mass after not working out for a while sets you back to square one.

The human body is incredibly dynamic. It undergoes a wide variety of biological adaptations, some that take place within the cell and some that involve the entire body, in response to exercise. One of the most visible is muscular hypertrophy, an enlargement of muscle fibers to bear more weight. Hypertrophy is caused by an increase in the amount of proteins that help muscles contract and is one of the more temporary adaptations that tends to revert to baseline without regular training. Seeing this loss happen can be disheartening, but there are some exercise-induced changes that tend to stay around longer, and help your bulging biceps return faster, too.

This study by Mendias et al. shows how rat skeletal muscle fibers grow in size (hypertrophy) 3 days after bearing more weight.

One muscular adaptation to exercise is satellite cell fusion to damaged muscle fibers. Satellite cells are small muscle stem cells that hang out between larger mature muscle fibers. The damage to muscle cells from exercise activates satellite cells to fuse with the fiber, healing it. This means that the nucleus (the part of the cell where DNA is stored) of the satellite cell is permanently added to the many nuclei of the muscle fiber cells. DNA serves as the instructions that cells use to produce proteins, like those that make muscles contract and enlarge with training. The result is that a muscle fiber with more nuclei can produce these proteins faster to get stronger. Even though the proteins that make muscles big will degrade over time with no training, those nuclei stick around for life. So, the second time you work out after building muscle mass and then losing it, that mass comes back quicker than the first time.

This fluorescent image by Addicks et al. 2019 shows white arrows pointing to satellite cells (red) nestled between the much larger mature muscle cells (outlined in green). The blue dots are nuclei.

Muscles aren’t the only body part that improve with exercise; the nervous system is honed with training too. Muscle contraction doesn’t happen on its own — it must be stimulated by messages from spinal nerves first. Resistance exercise, like lifting weights and bodyweight exercises like planks, trains these nerves to fire faster and stronger to tell more muscle fibers to contract harder. This ultimately results in a quicker and more powerful exertion of force, and is a big part of increasing strength with exercise. Even after muscle mass diminishes when training stops, these neural adaptations tend to persist longer, and the muscle is still stronger than it was before training.

Myth: Exercising a certain muscle group reduces the fat mass surrounding that body part.

This notion is called spot training or spot reduction, and is misleading in regards to how body metabolism functions.

Exercises like push-ups, sit-ups, and other high-intensity, short-duration activities mainly utilize carbohydrates as fuel to sustain those sudden bursts of activity. Fat molecules called lipids, which are stored in the body’s fat depots, are broken down in a type of chemical reaction called oxidation. Fat-burning oxidation is more important for fueling long-duration cardio exercise, like running and swimming. That means that the type of exercise that reduces body fat is pretty different from the type of exercise that builds strength.

Furthermore, lipids that are secreted from fat cells don’t simply travel directly into the nearest muscle cells. They are released into the bloodstream as molecules called free fatty acids, and are carried all throughout the body to whichever muscle groups are in need of fuel for energy. So doing crunches every day, while good for your health, won’t eliminate belly fat by itself.

Lipolysis is the release of lipids from fat cells in response to energy demand.
Myth: Sweating a lot means you’re burning a lot of calories.

Sweat is important for regulating body temperature, especially during exercise, when the active muscles are generating heat. Generally speaking, more intense physical activity results in more sweating because of the body’s greater need to get rid of the heat produced by its muscles. But sweat is by no means a direct measure of energy expenditure, nor does the body “sweat out” calories, fat, or toxins through the glands on the skin (sweat is just water and salt). There is also no evidence that exercising in the heat burns calories more efficiently, and it calls for caution because of the risk of dehydration and overheating.

Myth: The greatest benefit of exercise is weight loss.

Many people embark on exercise regimens in hopes of slimming down, but misleading conventional wisdom has conflated body fat content with overall health. While these factors are related, some of the most important benefits of exercise can be conferred independently of weight loss.

The two main types of fat tissue in humans are visceral fat — the kind that surrounds the body’s organs — and subcutaneous fat, the more innocuous jiggly kind. Regular exercise can trim down both types, and reducing visceral fat mass does significantly alleviate the risk of serious health complications like cardiovascular disease (CVD) and type II diabetes. However, individuals who engage in regular exercise experience lower rates of CVD deaths and decreased risk of type II diabetes than sedentary individuals, regardless of body fat mass. In fact, the CDC lists physical inactivity and being overweight as separate risk factors for development of type II diabetes.

This top-down abdominal MRI published by Golan et al. in 2012 shows visceral fat in green and subcutaneous fat in blue.

Resistance training, which has less of an impact on weight loss, still offers long-term health benefits. Muscular strength is a reliable independent predictor of mortality, indicating a crucial role in preventing the development of physical disabilities and chronic illnesses like CVD and kidney disease. Additionally, muscular strength is positively associated with both physical and mental quality of life in the elderly, promoting functional independence and minimizing chronic pain.

Despite the mainstream emphasis on weight loss, many of the most important health benefits of exercise don’t depend on a reduction in body fat. So even if you’re not seeing visible results with your exercise regimen, you are still taking good care of your health.

If I didn’t HAB you: how bacteria work together in harmful algal blooms

Lake Erie is one of several Midwestern lakes that is plagued by harmful algal blooms (HABs) in the summer months. If you live near one, you may have seen the green “pond scum” floating on the surface or heard about the potent toxins that are released into the water, endangering both ecosystems and public health. Generally, HABs threaten the valuable services that freshwater ecosystems provide, like access to drinking and irrigation water, and can also kill fish and other animals.

Many severe blooms are caused by a group of prokaryotes called cyanobacteria, formerly known as “blue-green algae” even though they are not algae at all. These small, carbon dioxide-eating machines first began performing photosynthesis to produce oxygen billions of years before plants appeared on land, which set the stage for oxygen-consuming organisms to develop. Cyanobacteria can be found in environments around the world, from the open ocean where a few key groups act as primary producers that provide nearly half of the world’s oxygen to hot springs where they form spectacularly colorful mats and act as important nutrient cyclers.

While not all cyanobacteria are harmful, those that form cyanobacterial HABs, or cHABs, can cause serious problems when they grow to high densities. However, these cyanobacteria do not act alone – they have special sidekicks that help them survive and thrive. You may already be familiar with the concept of the human gut microbiota, a complex community of trillions of microbes that work together to help us digest our food and fight off gut infections, among their many functions. Similarly, many cyanobacteria in nature are supported by communities of other, non-photosynthetic bacteria.

Cyanobacteria come in many colors.

One well-studied cyanobacterium is Microcystis aeruginosa, which causes cHABs in freshwater environments around the world, including many Midwestern lakes. Besides producing potent toxins called microcystins, M. aeruginosa also secretes extracellular polymeric substances (EPS). These sticky carbohydrate molecules join individual M. aeruginosa cells together into larger colonies and house the bacterial community, which provides the cyanobacteria with essential nutrients. In fact, M. aeruginosa produces more EPS in the presence of other bacteria compared to when it is grown alone, suggesting that it tries to recruit the community’s services by providing it a place to call home.

Although M. aeruginosa can grow without it, the bacterial community boosts the cyanobacterium’s growth and contributes to its worldwide distribution and dominance among bloom-causing cyanobacteria. Researchers have now started to take on the challenge of uncovering these communities’ inner workings, which could help us to understand how and why blooms form and persist.

Into the HAB-verse

As Lake Erie is no stranger to M. aeruginosa blooms, it is a fruitful place to conduct this research. For example, by sequencing the DNA of non-photosynthetic bacteria from Lake Erie blooms, researchers can predict their functions to better understand their interactions with M. aeruginosa. However, these functions remain speculative, as DNA sequencing alone cannot confirm that these functions are actually being carried out. In fact, relatively few interactions between cyanobacteria and non-photosynthetic bacteria have been directly observed in the lab – especially among freshwater communities.

One example of such an interaction between cyanobacteria and non-photosynthetic bacteria is carbon cycling, which is essential for maintaining the bacterial community. While cyanobacteria can produce energy through photosynthesis using only sunlight, water, and carbon dioxide, non-photosynthetic bacteria cannot make their own food. Their sustenance comes from the cyanobacteria, which release small molecules into the surrounding environment. Some more specialized species can also consume cyanobacterial EPS, as they can break down the complex carbohydrates it contains.

However, carbon exchange might not be a one-way street, as non-photosynthetic bacteria may also be capable of producing additional carbon dioxide for M. aeruginosa to use. This reciprocity could be particularly important when the bloom is in full swing. During this time, changes in the water’s chemical properties make carbon dioxide less available to the photosynthesizing cyanobacteria, hindering their growth. However, this added source of carbon dioxide could help to sustain the cyanobacteria even when blooms are very dense.

Keeping the bloom going

Like with carbon, efficient cycling of nitrogen is also crucial. Nitrogen is a key building block for DNA and other cellular molecules; when in low supply, it can limit cyanobacterial growth during blooms. In fact, non-photosynthetic bacteria seem to be the sole providers of this essential nutrient to M. aeruginosa, which, unlike some other cyanobacterial species, cannot take up nitrogen from the air. This nitrogen cycling service appears to be so efficient that it can sustain bloom-forming cyanobacteria when there is little nitrogen around.

Beyond contributing extensively to nutrient recycling in the cHAB ecosystem, non-photosynthetic bacteria can also produce or deactivate compounds that regulate the growth of their cyanobacterial partners. For example, several species of bacteria associated with M. aeruginosa blooms in Lake Erie are capable of producing auxins, hormones that promote the growth of both cyanobacteria and plants. Meanwhile, other bacteria can make vitamin B12, another essential nutrient that M. aeruginosa cannot produce itself.

Some bacteria can also degrade freshwater pollutants such as benzoate, a compound that is detrimental to cyanobacterial growth. However, it appears that no single bacterium harbors the complete chemical pathway needed to degrade benzoate. Instead, different parts of this pathway appear to be distributed across multiple species of non-photosynthetic bacteria, which means that they might cooperate to make benzoate degradation happen.

Non-photosynthetic bacteria (green) physically associate with Microcystis aeruginosa (red). Source.
Community-aware HAB control

The relationship between cyanobacteria and bacterial communities has interesting implications for managing cHABs. One idea is to introduce bacteria that either degrade cyanobacterial toxins or kill the cyanobacteria directly into affected lakes. However, these approaches have not yet been tested in the field, and their effectiveness might be limited by the bacterial community. These native bacteria could, for example, compete with the introduced microbes for space and nutrients, shield the cyanobacteria from harm, or increase the overall resilience of the community through other means.

Unfortunately for aquatic ecosystems, cHABs are increasing in both frequency and magnitude on every continent except Antarctica. Among the drivers of this global pattern is the runoff of nutrients and human-made pollutants into aquatic ecosystems. This leads to eutrophication, a process in which cyanobacterial growth is boosted by an abundance of nutrients like nitrogen and phosphorus, often to the detriment of other aquatic species. Climate change may also play a role, as warming can prevent mixing between different layers of lake water, creating a stable surface that promotes cyanobacterial growth.

Finding effective management solutions for cHABs is more important than ever. However, strategies that target the supportive bacterial community are limited by our incomplete understanding of how this symbiotic relationship works. Additionally, because these bacterial communities play an important role in nutrient cycling, scientists must consider the potential detrimental impacts of eliminating them on the entire freshwater ecosystem. Still, using a combination of lab and field experiments, researchers will be able to further characterize the interactions between bloom-forming cyanobacteria and their bacterial communities and inform cHAB management approaches. While nothing short of a monumental effort, this work will provide essential insights into how cHABs form and function, and how we can mitigate the damage they cause.

From Geysers to COVID Testing: The Crucial Contributions of Basic Research

On my walks around Chicago, I pass dozens of COVID-19 testing sites drawing people inside with sandwich boards that read “PCR testing”. While PCR’s gold-standard status in scientific research was well-established before the pandemic, it has since worked its way from scientific jargon into common vernacular.

The medical adaptation of PCR for COVID-19 testing illustrates the journey scientific research can take. Discoveries begin with a foundation in basic research— a type of research that aims to uncover the fundamental properties that make life possible. Once these basic facts are better understood, translational research uses those principles to develop novel technologies to improve human life. Furthering scientific innovation requires supporting basic research to continue broadening our knowledge of the natural world.

PCR, or polymerase chain reaction, is a primary tool in biological research that mimics the natural process of making copies of genetic material, but rather than occurring in a cell, it achieves this process in a test tube. Specifically, PCR amplifies relatively small pieces of DNA. It was a revolutionary invention for scientific research because it allows scientists to amplify small amounts of genetic material of interest to yield enough genetic material for a multitude of experiments.

COVID-19 testing determines if there is any of the SARS-CoV2 virus’ specific genetic material in a patient sample. The sample undergoes a process that converts the genetic material of the virus, RNA, into a corresponding DNA sequence. The sample then undergoes PCR testing, which recognizes if this viral DNA is present and makes enough copies to register positive on the test.

PCR uses heat to untangle the double-stranded DNA helix into linear single strands. Nucleotides, the individual building blocks of DNA, come in four different types—cytosine (C), guanine (G), adenine (A) and thymine (T). C and G bind together, while A and T are partnered. During the PCR process, the bases along each single strand pair up with free complementary nucleotides.

Left alone, nucleotides would bind with their partners quite slowly. But introduce an enzyme, a biological catalyst, and the reaction picks up the pace. PCR uses a heat-resistant enzyme called Taq polymerase as a wingman to match up the nucleotides along the single-strand template with their binding partners, creating a new double strand of genetic material. The new double-stranded helix can again be separated with heat, creating even more strands for the enzyme to pair up. When the original sequence’s complementary strand is matched up with its partnering nucleotides, the original strand is regenerated. This cycle is repeated numerous times, copying one strand into millions.
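
The arithmetic behind this copying is simple doubling: if every strand present is copied in each cycle, then after \(n\) cycles the number of copies is roughly

\[
N \;\approx\; N_0 \times 2^{\,n},
\]

so a single starting molecule run through 30 cycles yields on the order of \(2^{30} \approx 10^9\) copies (in practice somewhat fewer, since no cycle is perfectly efficient).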

Cartoon showing the original double strand (dark blue) undergoing multiple cycles of PCR to separate into single strands that are then copied (light blue). Each new strand serves as a template to be copied in the following cycle. Illustration credit: Genome Research Limited.

Taq polymerase, the workhorse of PCR, was detected in an unlikely place over 50 years ago. It comes from heat-loving bacteria discovered in 1969 by a research team studying life in the bubbling hot geysers of Yellowstone National Park. At high temperatures, the enzymes that enable the biological reactions of life can become inactive, much like how frying an egg deactivates the proteins in egg whites, turning them from clear to white. The researchers found that these bacteria carry specialized enzymes that remain active even at high temperatures, making them compatible with processes that require heat, like untangling DNA.

In 1983, Dr. Kary Mullis invented modern PCR by combining the known properties of DNA with the heat-resistant Taq polymerase enzyme from Yellowstone. A decade later, Mullis was awarded the Nobel Prize in Chemistry for his invention. The Yellowstone researchers likely never imagined that poking around geysers would lead to such a revolutionary technology. The team’s quest for a basic understanding of how life was possible at high temperatures paved the way for Mullis’ translational application.

A biofilm of bacteria from a geyser in Yellowstone National Park. Image Credit: Neal Herbert/ National Parks Service.

To fuel ongoing translational research, we must continue to expand our foundational knowledge. As a Ph.D. student in Northwestern University’s Driskill Biomedical Graduate Program, I conduct basic research into how epithelial tissue grows and develops. The epithelium is a thin sheet of cells that lines organs, acting as a barrier and compartmentalizing the body. The skin is the most famous example of epithelial tissue, covering the entire body and protecting it from outside elements.

I use developing frog embryo skin to study epithelial growth. Frog embryo skin is like the epithelium that lines the lungs and fallopian tubes in that it also contains multiciliated cells (MCCs). MCCs are cells that have over 100 hair-like appendages called cilia emanating from the surface, like a surrealist sea anemone. In the lungs, cilia push down mucus to protect our respiratory organs; in the fallopian tubes, cilia guide egg cells down the tubes.

Image of multiciliated cells from embryonic frog (Xenopus laevis) skin, with cilia shown in green. Image credit: Mitchell lab, Northwestern’s Feinberg School of Medicine.

MCCs embed into the epithelium through a process called radial intercalation. MCCs begin to develop below the surface and squeeze upward between neighboring cells to join the outer epithelium, like Destiny’s Child’s Kelly and Michelle rising from beneath the stage to join Beyoncé at the Super Bowl.

To enter the epithelium, MCCs must bypass an obstacle course of tightly bound junction proteins that are responsible for making the epithelium a barrier. My research goal is to understand the communication between MCCs and their neighboring cells that allow MCCs access into the epithelium.

My research program is associated with Northwestern University’s Feinberg School of Medicine because of its translational applications in the medical field. While it may seem a stretch to connect studying embryonic frog skin to preventing cancer metastasis or developing synthetic organs, it’s not far off. Cancer cells also radially intercalate between cells to enter the bloodstream and spread throughout the body. Similarly, understanding how junction proteins respond to tension in the epithelium could help scientists shape flat sheets of epithelial cells into 3D, lab-grown mini organs for future organ transplantations.

Basic research might appear to be science for science’s sake. However, the wealth of knowledge that can be gained from such research has the potential to spur revolutionary breakthroughs that vastly improve our lives. Sometimes, a journey that begins with probing a hot geyser for biological life can end in PCR-based COVID-19 testing sites throughout Chicago.

Mitochondria Are More Than Just the Powerhouse of the Cell

The extent of the average American’s knowledge regarding mitochondria is that seemingly ubiquitous adage from high school biology class: mitochondria are the powerhouse of the cell. But not everyone knows just how true that is, or the many fascinating biological secrets packed into these tiny organelles.

Many important reactions in the cell need to harness the energy stored in the bonds of a molecule called adenosine triphosphate (ATP), which contains three chemical groups called phosphates. ATP is the energy currency of the cell — in fact, life as we know it cannot exist without the breakdown of ATP. ATP powers reactions when it loses one of its phosphate groups, forming adenosine diphosphate (ADP). That third phosphate has to get reattached to ADP in order for ATP to be used again, and that is the main purpose of the mitochondria.
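
Written as a reaction, the energy-releasing step is the splitting of ATP by water into ADP and a free phosphate group; under standard conditions this releases roughly 30 kJ per mole, though the exact value depends on conditions inside the cell:

\[
\mathrm{ATP} + \mathrm{H_2O} \;\longrightarrow\; \mathrm{ADP} + \mathrm{P_i}, \qquad \Delta G^{\circ\prime} \approx -30.5\ \mathrm{kJ/mol}
\]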

ATP is broken down into ADP, phosphate, and energy to power some reactions in the cell. Image by Bioninja.

Mitochondria earn their powerhouse moniker because they are really efficient at making ATP. Efficiency refers to the portion of a fuel’s stored energy that a system puts to use, while the rest is lost as heat. Different systems use different fuel sources – for example, most automobile engines use gasoline, while mitochondria rely on food like sugars and fats. Mitochondria break the bonds in sugar and fat molecules to release energy. They then use this energy to power a series of reactions that reattaches the third phosphate group to ADP, regenerating ATP as a source of energy the cell can use. Most car engines are between 20 and 30% efficient at extracting energy from gasoline to make a car move, while mitochondria harness about 40% of the energy in food molecules to produce ATP. The 60% lost as heat may seem like a big waste, but this is what makes our bodies warm, so even the “wasted” energy serves a purpose.
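
Put as a formula, efficiency is simply the fraction of the fuel’s energy that ends up doing useful work rather than escaping as heat; plugging in the figures above:

\[
\eta = \frac{E_{\text{useful}}}{E_{\text{fuel}}}, \qquad \eta_{\text{car engine}} \approx 0.20\text{–}0.30, \qquad \eta_{\text{mitochondria}} \approx 0.40
\]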

Cells with mitochondria are bestowed an incredibly powerful advantage. With such an abundance of usable energy, cells can make more proteins, get larger, move further, and communicate with each other. Some evolutionary biologists theorize that mitochondria are the reason that cells were able to start organizing into multicellular organisms, such as you and me.

According to the endosymbiotic theory, there was once a time, well over a billion years ago, when primitive eukaryotes (cells that contain a compartment for DNA called the nucleus) hadn’t yet acquired mitochondria. Eventually, some eukaryotes likely tried to eat mitochondria-like bacteria, failed to digest them, and ended up with new mitochondrial compartments. The eukaryote benefits from the immense amount of ATP energy produced by the mitochondria, and the mitochondria enjoy access to nutrients and shelter from their host.

A cell consumes a bacterial organism in a process called endocytosis. This is how we think our cells gained their mitochondria.

A few intriguing traits of mitochondria serve as the pillars of the endosymbiotic theory. First, mitochondria are the only animal cell compartments that have their own DNA (called mtDNA). Several genes in mtDNA serve as the blueprint for proteins within the mitochondria that are required for ATP generation. In humans, the DNA in the nucleus comes from both parents, while mtDNA is passed down solely from the mitochondria packed into the egg. Single-celled organisms like bacteria similarly inherit all their genes from only one parent during replication, supporting the idea that mitochondria originated from bacteria.

Another piece of evidence for the endosymbiotic theory is the presence of double membranes surrounding mitochondria. Picture the cell as a magic water balloon that can pinch off parts of its outer latex without popping. Imagine gently pressing your index finger and thumb into the balloon, and then pinching off the pocket you made so that another mini balloon is now floating inside the water of the first. Other cellular compartments were formed this way – they have only one membrane, suggesting that the cell membrane folded in on itself to make little pockets. Now imagine that as you approach the water balloon, you press another, tiny water balloon against its surface and pinch off a pocket so that the tiny balloon is totally enveloped in latex from the big balloon. Unlike the first floating mini balloon, this one has two layers of latex (see figure of endocytosis process). The fact that mitochondria have two layers of membrane indicates that they already had their own membrane when the eukaryotic cell membrane folded around it. Since bacteria have a single membrane, it would make sense that mitochondria started as ingested bacteria.

This microscope image shows the structure of a mitochondrion, with the double membrane surrounding it. Photo from the George E. Palade EM Collection.

The two membranes are critical to the function of modern mitochondria, with different proteins embedded in each membrane and electrically charged atoms called ions stored in the space between. One important protein located on the inner mitochondrial membrane is ATP synthase, which carries out the final step of the series of reactions that create ATP. During the first steps of the process, hydrogen ions build up between the two membranes like water behind a dam and flow through the F0 region of ATP synthase (see figure of ATP synthase) like water turning a miller’s wheel. This motion causes the F1 domain, holding ADP and P, to change shape and smash them together to make ATP. The protein then releases ATP so it can be used by the cell. You can see a video representation of this remarkable reaction here.

The molecular structure of ATP synthase. Photo from the Protein Data Bank.

Even though mitochondria are clearly great at what they do in the realm of energy production, biomedical research is discovering additional roles for mitochondria in the cell. Mitochondria have been found to sequester certain proteins, regulating a cell’s response to stress. They can interact with other cellular compartments to mediate nutrient balance and signaling within the cell. The vast mitochondrial network undergoes constant remodeling: individual mitochondria fuse and split from one another, and parts of the network are broken down when they’re damaged or when the cell doesn’t need them. Mitochondrial network remodeling is particularly relevant to cancer, since tumor cells alter the production and use of energy so they can survive and divide out of control.

These images by Lee et al. show a mitochondrial network that is more broken up (top two panels) compared to a more fused network (bottom two panels).

Ultimately, mitochondria are more than just the powerhouse of the cell, though their supreme energy efficiency has certainly earned them that particular medal. They are also hosts to some of the most awe-inspiring clues to our evolutionary origins, amazing molecular machinery that is equal parts powerful and elegant, and avenues for superior understanding of disease.

Room for Dessert: Why We Crave Sweets Even When We’re Full

We’ve all experienced that feeling of being completely full after a meal, yet still having room for dessert… maybe even craving a little something sweet. While you may have heard family members and friends refer to this gastronomic phenomenon as the “second stomach for dessert,” research scientists often use the term hedonic hunger, meaning the desire to consume foods for the purposes of pleasure and in the absence of physical hunger. Why does this happen? Let’s start by taking a look at the mechanisms involved in hunger and appetite control.

Regulation of food intake

Our digestive system is in constant communication with the brain, sending signals back and forth to ensure that we’re getting enough nutrients to meet our body’s needs. When our body senses an energy shortage, it sends out a variety of hunger signals that drive us to eat. A major player influencing our decision to start eating is a hormone called ghrelin. This “hunger hormone” is produced primarily by the cells lining the stomach to stimulate appetite in response to low energy or in anticipation of a meal. During a meal or snack, our body senses an increase in available energy and suppresses these hunger signals. At the same time, it starts to send out signals that we are full through the appetite-suppressing hormone leptin. In other words, our digestive system and our brain communicate about whether we have sufficient calories and respond accordingly by signaling us to eat or to stop eating. However, there are several factors that may cause us to eat in the absence of an energy need.

Gut hormones, including ghrelin, act as signals communicating between the brain and digestive tract. The brain matches these signals with information such as taste and smell stimuli, learned associations, pleasant (hedonic) sensations and energy needs to influence eating behavior. Brain region abbreviations: PFC, prefrontal cortex; NAc, nucleus accumbens; VTA, ventral tegmental area; Hypo, hypothalamus; NTS, nucleus tractus solitarius. Clemmensen et al., 2017, Cell.
Reward-based eating

Like many pleasurable behaviors, eating tasty food leads to the release of dopamine, a signaling molecule that plays an important role in feeling pleasure. While all tasty foods can cause a dopamine rush, sweet and fatty foods are especially pleasure-inducing. Over time, we develop associations between the stimuli linked to delicious foods, such as the sight, smell or even thoughts of these foods, and the rewarding feeling we get from the dopamine release. This process, called conditioning, can cause a dopamine surge in anticipation of a yummy treat, motivating us to eat. This phenomenon has been demonstrated in mice conditioned to receive a sweet reward (a 20% sucrose solution) after a five-second audiovisual cue. In these conditioned mice, dopamine activity increased in response to the conditioned stimulus as well as the sweet reward. Similarly, brain imaging studies in humans have shown that dopamine levels spike in response to the sight, smell and taste of food without actually consuming the food, which increases the desire to eat. Dopamine is so important in the motivation to eat that mice lacking the ability to produce dopamine die of starvation.

The “hunger hormone” ghrelin is also involved in our craving for desserts. Studies have shown that ghrelin shifts rodents’ preferences towards sweet and fatty foods even if they are not hungry. For example, rats that can’t respond to ghrelin signaling ate less of a cookie dough treat after a full meal compared to those that could. Similarly, mice that can’t produce the active ghrelin hormone eat less of a high-fat dessert after a full meal compared to mice that can. These studies reveal that ghrelin is involved in the drive to consume food for pleasure, even without the need for calories.

In a study by Thanarajah et al., milkshake consumption caused immediate dopamine release due to pleasant taste sensations, as well as delayed dopamine release that was likely due to signaling following food consumption (post-ingestive signaling). Image modified from Thanarajah et al, 2019, Cell Metabolism.
Sensory-specific satiety

Another important aspect of our desire to eat dessert on a full stomach is something called sensory-specific satiety, which occurs when a person is less eager to continue eating a food that they’ve already eaten compared to a “new” food. In one landmark study on sensory-specific satiety, participants received a four-course meal consisting of either the same dish served four times or four different dishes. The participants who received four different courses consumed 60% more calories than those who received four identical courses, primarily because of the perceived pleasantness of the new foods. In another study, participants were given fries and brownies to eat with or without condiments (such as ketchup and vanilla cream). They ate more and gave higher pleasure ratings when the food was accompanied by condiments. Essentially, when our brain has lost interest in a certain food, we perceive a feeling of fullness, whereas our appetite returns when we’re given the option to try a new food or even a new flavor (such as adding ketchup to our fries). In terms of having room for dessert, our brains may be bored of the main dish, but dessert serves as a new stimulus, reactivating our desire to eat.

Exposure to new foods or flavors makes eating more pleasant and renews appetite, preventing sensory-specific satiety, the decline in satisfaction that occurs when continuing to eat the same food.
Image obtained from Pixabay.
Making room for dessert

When we’re presented with the option of dessert after a filling meal, reward-seeking behavior and sensory-specific satiety trick the brain into wanting more. These signals override the fact that we’re already full and don’t have a physiological need for calories. When simply thinking about food or visualizing food can influence the levels of signaling molecules like dopamine and appetite-related hormones like ghrelin, it’s not surprising that many find it hard to resist the temptation of dessert.

Lagrange points: A lesson in gravity and a path to space exploration

Years ago, I parked my car and dashed into a neighborhood shop, only to find that my car had rolled downhill. Fortunately, the slope was not too steep, and the car was stopped by a high curb about 40 feet away. Ignoring the gawking passersby, I jumped in and drove off, embarrassed to have shown such a poor command of gravity.

As it turns out, an understanding of gravity is also critical to knowing where to park objects like telescopes, satellites and stations in space. The Earth-Sun system has five special locations called Lagrange points, named after the Italian-French mathematician Joseph-Louis Lagrange, who investigated the complex gravitational relationship among three separate objects. At the Lagrange points, the combined gravitational pull of the Earth and Sun supplies exactly the centripetal force needed for an object to orbit the Sun in step with the Earth. Therefore, an object stationed at one of the Lagrange points tends to stay in this position as the Earth orbits the Sun.

Lagrange points L1, L2, and L3 are aligned with the Earth and Sun, while L4 and L5 each form an equilateral triangle with the Earth and Sun. Because gravity and centripetal force depend on mass, the Lagrange points at these triangular positions are stable only when the ratio of the larger mass to the smaller mass exceeds 24.96. Fortunately, the Sun is about 333,000 times more massive than the Earth, so L4 and L5 are stable, making them good candidates for future space stations or asteroid mining locations. Already, some space rocks and dust have accumulated at these stable points, like dust collecting in the corner of a room.
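
For the curious, that 24.96 figure comes from the classical stability analysis of the restricted three-body problem: the triangular points L4 and L5 are stable whenever the two large masses satisfy

\[
\frac{M_1}{M_2} \;>\; \frac{25 + \sqrt{621}}{2} \;\approx\; 24.96,
\]

a condition the Sun-Earth pair, with its mass ratio of about 333,000, meets by a wide margin.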

The Earth-Sun system’s five Lagrange points.

As the Earth circles the Sun, the collinear Lagrange points (L1, L2 and L3) are inherently unstable: an object parked at one of them slowly drifts away from its position on a timescale of roughly 23 days. This instability means space telescopes at these positions must use some propellant to remain aligned. One such telescope is the James Webb Space Telescope (JWST), which reached the L2 position, about a million miles from Earth, in January 2022. Nearly 2,700 times farther from Earth than the Hubble Space Telescope, the JWST cannot be serviced, even if we still had a working space shuttle. However, the team hopes its distant location will provide observations of deep space and the universe’s far-off past.
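
That million-mile figure is no accident: balancing gravity against the centripetal force needed to keep pace with the Earth gives a standard back-of-the-envelope estimate for the distance of L1 and L2 from Earth,

\[
r \;\approx\; R \left( \frac{M_\oplus}{3\,M_\odot} \right)^{1/3} \;\approx\; (1.5\times10^{8}\ \mathrm{km}) \times 0.01 \;\approx\; 1.5\times10^{6}\ \mathrm{km},
\]

where \(R\) is the Earth-Sun distance and \(M_\oplus/M_\odot \approx 1/333{,}000\) is the Earth-to-Sun mass ratio. That works out to about 1.5 million kilometers, or roughly a million miles.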

The Earth-Sun system isn’t the only place Lagrange points are found. In fact, five Lagrange points exist between any two massive bodies. For example, a slew of asteroids, bits of rock and ice, comets and dwarf planets have gathered at the L4 and L5 points between Jupiter and the Sun. Additionally, the L2 point in the Earth-Moon system has served as the location for communications satellites covering the far side of the moon, and L4 and L5 between the Earth and the Moon would be especially stable for permanent space stations.

Researchers have also suggested placing shields at the L1 position directly between the two planetary bodies for various applications. For example, NASA scientists have proposed placing a magnetic shield at the Mars-Sun L1 point to make Mars more amenable to human exploration and settlement. This technology could create an artificial magnetic field that might restore the red planet’s protective magnetic shield and protect those on Mars from destructive solar wind.

In addition to their role in space missions, Lagrange point shields have been explored for their potential to combat climate change. One idea is to place small shields at the Earth-Sun system’s L1 point to block some of the sunlight reaching Earth, thereby (theoretically) reducing global warming. A similar proposal is to place a special lens at L1 to scatter the Sun’s rays and decrease the amount of light reaching the Earth. These extreme measures would be expensive and difficult to optimize to ensure that Earth still gets enough sunlight, but climate change is a desperate problem.

These applications of Lagrange points might sound like science fiction, but many exciting advances start out that way. For now, we can sit back and enjoy the fascinating images from the JWST at Lagrange point L2—and remember the importance of gravity when parking our cars. 

You are WHEN you eat: How feeding schedules can synchronize the body’s circadian clocks

In recent years, a type of intermittent fasting called Time-Restricted Eating/Feeding (TRE/TRF) has received unprecedented attention in the wellness world. This diet involves only eating during a defined window of time in the day, usually spanning eight to twelve hours. Studies have suggested that this type of structured eating may have a wide range of benefits, including promoting weight loss, supporting cognitive health and even reducing cancer risk. How can one simple practice have so many diverse effects on health?

The answer comes down to our body’s circadian rhythm regulation system, which functions much like an orchestra. In the same way an orchestra has many different musicians working together to produce a beautiful song, your body has biological clocks ticking away in every cell. And like the conductor, there is a master clock in your brain that keeps all of the biological clocks in the body synchronized through physiological and hormonal signals. Just as a conductor can respond to the way the orchestra is playing, the master clock can receive signals back from the players about their performance. This feedback creates a cycle of information that allows all the clocks to adapt to changes and remain in sync. In a perfect world, this biological symphony functions without error.

However, modern society has produced many challenges for this system. Due to the advent of electricity and electronics, we are increasingly exposed to light at night. The conductor clock in our head responds primarily to light cues, and exposure to artificial light when it should be dark can throw it off. An imperfect conductor leaves the rest of the system to fend for itself, causing the body’s clocks to become desynchronized. And because of the built-in feedback loop within the system, these disorganized time cues can reach the brain, making it even more difficult to restore synchrony among all the body’s clocks. This desynchrony is referred to as circadian misalignment. You don’t have to look too far to see the consequences of this misalignment – it’s what occurs when you experience jet lag, when you are adjusting to daylight saving time, and even why Mondays can feel so terrible after an irregular weekend sleep schedule. In all these situations, the different clocks within your body are receiving disorganized time cues and are struggling to readjust.

Electricity and technological devices expose us to light at night, confusing the conductor circadian clock in our brain and driving circadian misalignment. (Image credit: Unsplash)

When it happens occasionally, circadian misalignment is not a fun experience, but it does no real lasting harm. However, when this misalignment happens chronically (as in the case of shift work), it can have significant effects on health. Roughly 15-20% of the workforce in the United States is made up of shift workers (1), who work out of sync with the light-dark cycle. There are strong correlations between shift work, cancer, and obesity, particularly in women. One study showed that five years of night shift work slightly increased breast cancer risk in women (2). Additionally, in experiments where participants were subjected to just 10 days of simulated shift work, subjects exhibited markers of a pre-diabetic state (3). These stark health consequences are likely related to the role of the ‘conductor’ clock as a key regulator of genes involved in metabolism, the clearance of old, damaged, or abnormal proteins, and DNA repair processes (4). Shift work is a dramatic demonstration of circadian misalignment, but chronic desynchrony is becoming a reality even for those who work standard 9-5 jobs but are not in sync with the environmental light-dark cycle.

So how can we get this system back on track in spite of modern society’s interference? Research points to the timing of one potent time cue: food. In a perfectly functioning system, the central clock tells our bodies when we should eat, priming the pancreas, liver and gut to break down and absorb the nutrients from food. However, the feedback-based desynchronization of the central clock means we rarely adhere to the ideal eating schedule. One team of researchers investigated this relationship through a smartphone app that tracks feeding behavior. They discovered that participants ate erratically during waking hours, stopping food intake for extended periods of time only during sleep (5). This constant food intake means the body is unable to predict when to prime the metabolic system to most efficiently process these calories, throwing off the body’s circadian clocks – and, in turn, the master clock in the brain.

Time-Restricted Feeding (TRF) solves this problem by giving our body a predictable time window for calorie intake, helping to realign our metabolic clocks. Several studies have demonstrated the metabolic benefits of this practice. One study showed that mice fed during a restricted window lost weight despite ingesting the same number of calories as free-fed mice, and that when fed a high-fat diet (similar to the average Western diet), TRF mice were protected from obesity. In humans, when obese and overweight participants shrank their food intake window to 8-12 hours, they lost weight, improved their sleep, and reported increased energy levels (6-8).

The metabolic alarm clock generated by time-restricted feeding also serves as an important feedback signal to align the multitude of clocks around your body. TRF may have benefits for cognitive health, as it has been shown to improve symptoms of depression and anxiety in response to shift work in rodents (9). In other mouse studies, TRF has been shown to reduce symptoms of the neurodegenerative diseases Huntington’s and Alzheimer’s (10,11) and decrease cancer incidence (12).

In a world full of erratic circadian signals, the timing of food is a simple and accessible intervention to combat misalignment and provide a wide range of health benefits.

References:

  1. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C14&q=mcmenamin+2007+shift+work&btnG=&oq=McMenamin%2C+2007+shi
  2. https://aacrjournals.org/cebp/article/27/1/25/71410/Night-Shift-Work-Increases-the-Risks-of-Multiple
  3. https://www.pnas.org/doi/abs/10.1073/pnas.0808180106
  4. https://www.sciencedirect.com/science/article/abs/pii/S2405803319301414
  5. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4635036/
  6. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4635036/
  7. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6004924/
  8. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6953486/
  9. https://onlinelibrary.wiley.com/doi/10.1002/jnr.24741
  10. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5752678/
  11. https://alz-journals.onlinelibrary.wiley.com/doi/abs/10.1002/alz.052723
  12. https://www.cell.com/cell-metabolism/fulltext/S1550-4131(15)00224-7

What your nose can tell you about mental health

Lately, our noses have been getting a lot more attention. COVID-19 changed the way our noses interact with the outside world, from covering them up with a mask to experiencing a diminished sense of smell as a symptom of the virus (as noted by the association between viral outbreaks and poor reviews for scented candles)1. Of the more than 500 million people who have now contracted COVID worldwide2, about 38% develop anosmia, or loss of the sense of smell3. Amid the booming interest in the role of olfaction in our physical health, an often-overlooked phenomenon is the relationship between our sense of smell and mental health.

Olfactory dysfunction has long been linked to depression. In one study, patients with depression exhibited reduced olfactory function compared to non-depressed controls, and symptoms of depression worsened with the severity of olfactory impairment4. Reduced olfactory function in patients with depression can be recovered by treatment with antidepressants5. Experts believe the relationship between depression and olfaction is bi-directional, meaning depression may pre-dispose an individual to olfactory dysfunction, or olfactory dysfunction may predispose an individual to depression. To further disentangle this relationship, some researchers have investigated how changes in the brain area responsible for olfaction impacts symptoms of depression.

The olfactory bulb (OB), a brain area responsible for processing odor information, is impacted by depression. Studies show that patients with depression have smaller OBs than non-depressed individuals6. Lesioning, or surgically removing, the OB in rodents induces several behavioral changes that mirror symptoms of human depression. These include hyperactivity, deficits in memory, and changes in food-motivated behavior, all of which can be reversed by treatment with antidepressants7. In fact, the OB-lesioned rodent is the only animal model that accurately reproduces the long-term human response to antidepressant drugs8.

The olfactory system is the only sensory system that relays information directly from the sensory organ (i.e. nose) to the brain (i.e. OB)9. For other sensory organs (such as ears, mouth and skin), information is first condensed and sent through a structure called the thalamus, which acts as a middleman, before making its way to the sensory cortex. The olfactory system’s unique access to the brain means that a particular smell can directly initiate a cortical response.

The olfactory system is the only sensory system that relays information directly from the sensory organ to the brain, bypassing the thalamus. (photo credit: Neuroscientifically Challenged10)

The importance of the OB in depression relates in particular to its connections with brain areas involved in emotion and memory, namely the amygdala and hippocampus11. The amygdala is involved in processing threatening stimuli12, while the hippocampus has a major role in learning and memory and is affected by many psychiatric disorders13. Patients with depression exhibit increased resting-state activity in the amygdala as well as exaggerated reactivity in response to negative emotional stimuli14. Additionally, prolonged depression is linked to cell death and loss of volume in the hippocampus.

Why would the olfactory system be so integrated with brain areas crucial for emotional regulation and memory processes? The answer likely lies in adaptive evolution, as odors provide critical information for survival, influencing everything from our social relationships to our food intake15-17. Today, dysfunction in the OB-amygdala-hippocampus circuit underlies many of the behavioral, immunological, and neurochemical changes associated with depression.

Depression is linked to dysfunction within the olfactory system. (photo credit: UnSplash)

Changes in emotion and memory processing in response to olfactory dysfunction present important implications for COVID-19 patients who lose their sense of smell either temporarily or permanently due to the virus18. Over half of a sample of COVID survivors in the United States reported symptoms of depression months after recovery, with those who experienced more severe symptoms of the virus exhibiting increased depression19. There are many potential factors that contribute to this trend, including changes in energy level and behavior resulting from the body’s immune response and the psychological stress of contracting COVID20,21. However, a recent study found that only the severity of smell and taste loss during COVID infection, and not more life-threatening symptoms such as shortness of breath or fever, correlated with depressed mood22. While further studies will need to be conducted to explain this finding, the relationship between the intensity of olfactory impairment and depression suggests that the OB circuit may be playing a role. With so many people facing the aftermath of COVID-19 illness, paying attention to the impact olfaction has on mental health will become more important than ever.

References:

  1. Analysis | What negative candle reviews might say about the coronavirus. Washington Post.
  2. Weekly epidemiological update on COVID-19 – 1 June 2022. https://www.who.int/publications/m/item/weekly-epidemiological-update-on-covid-19—1-june-2022.
  3. Mutiawati, E. et al. Anosmia and dysgeusia in SARS-CoV-2 infection: incidence and effects on COVID-19 severity and mortality, and the possible pathobiology mechanisms – a systematic review and meta-analysis. F1000Research 10, 40 (2021).
  4. Kohli, P., Soler, Z. M., Nguyen, S. A., Muus, J. S. & Schlosser, R. J. The Association Between Olfaction and Depression: A Systematic Review. Chem. Senses 41, 479–486 (2016).
  5. Croy, I. et al. Olfaction as a marker for depression in humans. J. Affect. Disord. 160, 80–86 (2014).
  6. Rottstaedt, F. et al. Size matters – The olfactory bulb as a marker for depression. J. Affect. Disord. 229, 193–198 (2018).
  7. Kelly, J. P., Wrynn, A. S. & Leonard, B. E. The olfactory bulbectomized rat as a model of depression: An update. Pharmacol. Ther. 74, 299–316 (1997).
  8. Morales-Medina, J. C., Iannitti, T., Freeman, A. & Caldwell, H. K. The olfactory bulbectomized rat as a model of depression: The hippocampal pathway. Behav. Brain Res. 317, 562–575 (2017).
  9. Purves, D. et al. The Organization of the Olfactory System. Neurosci. 2nd Ed. (2001).
  10. Neuroscientifically Challenged. Know Your Brain: Olfactory Bulb. https://neuroscientificallychallenged.com/posts/know-your-brain-olfactory-bulb.
  11. Rochet, M., El-Hage, W., Richa, S., Kazour, F. & Atanasova, B. Depression, Olfaction, and Quality of Life: A Mutual Relationship. Brain Sci. 8, 80 (2018).
  12. Baxter, M. G. & Croxson, P. L. Facing the role of the amygdala in emotional information processing. Proc. Natl. Acad. Sci. 109, 21180–21181 (2012).
  13. Anand, K. S. & Dhikav, V. Hippocampus in health and disease: An overview. Ann. Indian Acad. Neurol. 15, 239–246 (2012).
  14. Ramasubbu, R. et al. Reduced Intrinsic Connectivity of Amygdala in Adults with Major Depressive Disorder. Front. Psychiatry 5, (2014).
  15. Athanassi, A., Dorado Doncel, R., Bath, K. G. & Mandairon, N. Relationship between depression and olfactory sensory function: a review. Chem. Senses 46, bjab044 (2021).
  16. Brand, G. & Schaal, B. L’olfaction dans les troubles dépressifs : intérêts et perspectives. L’Encéphale 43, 176–182 (2017).
  17. Croy, I. & Hummel, T. Olfaction as a marker for depression. J. Neurol. 264, 631–638 (2017).
  18. Agyeman, A. A., Chin, K. L., Landersdorfer, C. B., Liew, D. & Ofori-Asenso, R. Smell and Taste Dysfunction in Patients With COVID-19: A Systematic Review and Meta-analysis. Mayo Clin. Proc. 95, 1621–1631 (2020).
  19. WebMD. The Link Between COVID-19 and Depression. WebMD https://www.webmd.com/lung/covid-19-depression (2022).
  20. Larson, S. J. & Dunn, A. J. Behavioral Effects of Cytokines. Brain. Behav. Immun. 15, 371–387 (2001).
  21. Wright, C. E., Strike, P. C., Brydon, L. & Steptoe, A. Acute inflammation and negative mood: Mediation by cytokine activation. Brain. Behav. Immun. 19, 345–350 (2005).
  22. Speth, M. M. et al. Mood, anxiety and olfactory dysfunction in COVID‐19: evidence of central nervous system involvement? The Laryngoscope 10.1002/lary.28964 (2020) doi:10.1002/lary.28964.


Sonoluminescence: Where sound and light meet


A solar cell turns light from the sun into electricity. A car’s engine turns the heat of burning gasoline into mechanical motion. A battery turns the energy of electrochemical reactions into electrical power. There are many types of energy transformation, but one of the most intriguing is sonoluminescence—a phenomenon in which bubbles turn sound into light.

Sonoluminescence was first observed in the 1930s, when researchers developing photographic plates applied ultrasound waves (sound waves pitched too high for us to hear) to photographic film and noticed that it became foggy. They discovered that bubbles had accumulated in the developing liquid and suspected a connection between the fogged film and the bubbles.

Their suspicion was correct. The pressure fluctuations from the pulsing ultrasound waves generated bubbles in the liquid. In the moments before the bubbles collapsed, they absorbed energy from the ultrasound waves and emitted this energy as extremely brief flashes of light. The light generated by this process, called multi-bubble sonoluminescence (MBSL), left behind small dots on the photographic film, causing it to appear foggy.

In further studies of sonoluminescence, scientists experimented with different parameters like the frequency of the sound waves and the pressure built up in the liquid. In the 1990s, one researcher tweaked the conditions to create one large bubble instead of many small ones. Each time this large, stable bubble absorbed sound energy and collapsed, it produced a flash of bluish light. The strobe-like light persisted for several days, with each individual flash resulting from one cycle of the ultrasound wave. This process is known as single-bubble sonoluminescence (SBSL).

So, how exactly is energy from sound converted to light? Scientists are still grappling with this question, and many theories and models have been proposed. The most widely accepted is the hot spot model, which suggests that the energy from the sound waves causes the gas inside the bubble to become extremely hot, creating light through two possible mechanisms. The first is that the gas particles accelerate and collide as they heat up, and just before the bubble collapses, this frantic motion releases energy as packets of light. The second is that the hot bubble emits thermal radiation according to its temperature, similar to the way stars shine.
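As a rough illustration of that second idea, Wien's displacement law relates a thermal emitter's temperature to the wavelength at which it glows most brightly. The sketch below assumes an illustrative hot-spot temperature of 10,000 K; actual estimates vary widely between studies and models.

```python
# Rough sketch: if the collapsing bubble's interior behaves like a thermal
# ("blackbody") emitter, Wien's displacement law gives the wavelength at
# which it glows most brightly. The 10,000 K figure is an illustrative
# assumption, not a measured value.

WIEN_CONSTANT_M_K = 2.898e-3  # Wien's displacement constant, meters * kelvin

def peak_wavelength_nm(temperature_k: float) -> float:
    """Peak emission wavelength (in nanometers) of an ideal blackbody."""
    return WIEN_CONSTANT_M_K / temperature_k * 1e9

assumed_hot_spot_temp = 10_000  # kelvin (assumption for illustration)
print(f"Peak emission at ~{peak_wavelength_nm(assumed_hot_spot_temp):.0f} nm")
# ~290 nm: in the ultraviolet, with the visible tail skewed toward blue,
# consistent with the bluish flashes described above.
```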

Scientists are interested in understanding more about the physical and chemical conditions inside a bursting bubble so that they can use sonoluminescence for exciting therapeutic, diagnostic and research applications. Ultrasound waves can penetrate deeply to reach tissues in the body that standard light sources can’t. Therefore, sonoluminescence could be used to generate bubbles in bodily fluids that produce light to activate drugs. In one study, researchers used sonoluminescence to switch on a light-triggered drug that targets cancer cells. The light emitted during the sonoluminescence process may also be used to image specific tissues of the body to help diagnose disease, according to the study.

Additionally, as the chemical conditions of the bubbles influence how much light is emitted during sonoluminescence, scientists can measure the light to gain insight into the chemical composition of the liquid. In one study, scientists observed a correlation between the amount of sonoluminescence and the quantity of argon and oxygen in various samples.

Although sonoluminescence isn’t ready for direct therapeutic or diagnostic use yet, it’s the subject of active research studies. Maybe another unexpected observation will yield an exciting new development in this bubbly science story.

References:

  1. Young, F. R. Sonoluminescence (book).
  2. Yasui, K. Acoustic Cavitation and Bubble Dynamics (book).
  3. https://www.researchgate.net/publication/2843747_Single-Bubble_Sonoluminescence
  4. https://asa.scitation.org/doi/full/10.1121/1.4929687
  5. https://pubs.acs.org/doi/10.1021/acsami.9b07084
  6. https://www.frontiersin.org/articles/10.3389/fbioe.2019.00466/full
  7. https://pubs.acs.org/doi/10.1021/acsami.9b07084
  8. https://pubs.acs.org/doi/abs/10.1021/jp003226x


Shedding Light on Lampenflora


Deep inside Belize’s Actun Tunichil Muknal (ATM) cave, I discovered a pallid seedling on the bank of a subterranean river. It had failed to develop past its embryonic stage, naked except for its cotyledons, the seed leaves that emerge before a plant’s true leaves. Perhaps a passenger on an unwitting tourist’s foot, it had found itself in an inhospitable new habitat. ATM is heavily regulated, allowing only 125 visitors per day and boasting no permanent light fixtures. Without light, this unlucky seedling was doomed to a premature demise.

The entrance to ATM cave.

In unaltered cave environments, there is no niche for photosynthetic organisms (like the unfortunate seedling) that rely on light from the sun for energy. A cave does not meet the conditions they need to survive, and they do not play a role in sustaining other organisms in the cave ecosystem. In heavily touristed caves (so called “show caves”), however, light introduced by human presence can carve out a niche for photosynthesizers, allowing them to flourish. This phenomenon has a charmingly literal Germanic name: lampenflora.

A seed introduced into a show cave with permanent lighting, such as Meramec Caverns outside Saint Louis, will have a far different fate than the doomed seedling in ATM cave. Visitors to Meramec will quickly notice that the walls of the cave near the installed lights are lush with an explosion of lampenflora. Plant life is thickest close to the light fixtures, where moss and even small ferns grow. Farther away, the photosynthetic life fades to a faint sheen of green algae and cyanobacteria on the cave wall, called a biofilm. These single-celled organisms are called “pioneer species” because their colonization of this environment paves the way for other organisms, such as the mosses and ferns, to settle. Thus, the bullseye formation of the lampenflora arises because the areas closest to the light were colonized first, and so reached their mature form (called a “climax community”) first, whereas areas farther away are still in the early stages of development.

A curtain formation in Meramec Caverns.

While human presence in caves might be good news for photosynthetic organisms, it can shift the balance of the ecosystem, disrupting the native species. Research directly studying the effects of lampenflora on cave ecosystems is limited, but one study conducted in Indonesia uncovered shifts in communities of arthropods (a category including insects and spiders) in caves with lampenflora. The researchers sorted arthropods into two groups based on their role within the cave’s ecosystem: decomposers and predators. Decomposers are typically more common in cave environments because they can consume a broad range of waste as food sources. However, the researchers discovered an increased proportion of predatory arthropods in heavily touristed caves with lampenflora.

The researchers propose that this change in community composition could contribute to destabilizing the cave ecosystem, disrupting native organisms in the cave and beyond. Some species, called trogloxenes and troglophiles, share their time between the cave and the outside environment. Through interactions with both of their habitats, these species can enable changes inside the cave to ripple into the outside world. Therefore, the establishment of lampenflora could have far-reaching effects on the surrounding environment.

While the researchers attribute these population shifts to lampenflora, they did not control for other factors, such as temperature and humidity, that may also be affected by human presence in these caves. It is clear that humans can disrupt native cave communities, but more work is needed to untangle which variables are responsible for these changes.

In addition to lampenflora’s potential role in shifting ecosystem balance, the green oases surrounding light fixtures can present a challenge for the conservation of both natural cave formations and human-made artifacts. As part of their metabolic processes, the organisms that comprise lampenflora interact chemically with the walls of the cave, causing pH fluctuations that are destructive to cave formations and art. In fact, much of the literature discussing lampenflora focuses on its management and reduction.

Lampenflora in the Kubacher Kristallhöhle.

Lampenflora is a fascinating, and in many cases beautiful, phenomenon. The process by which these organisms colonize new environments is a powerful testament to life’s resilience. They are also a poignant visual reminder of our power to drastically transform ecosystems throughout the biosphere. Though the changes wrought by human presence in caves are especially obvious and visually striking, they certainly do not stand alone; lampenflora stands as a symbol of our ability to create profound ecosystemic change through our mere presence.


Genetic Leapfrog: How Zoonotic Viruses Jump Species


Eating a pork chop. Getting a mosquito bite. Playing with your dog.

Interactions with animals are a common yet significant part of the human experience. While most animal encounters are harmless, some can pose serious threats to human health.

Animals harbor infectious agents that can spread to humans and cause severe illnesses known as “zoonoses” (1). We are most likely to become infected by animals such as pigs, cattle, or rodents that play important roles in our daily lives as food sources or cohabitants of our environments. In recent years, there has been a sharp increase in the number of emerging zoonoses as the demands of the modern world reshape human-animal interactions (2, 3). Changes in land use and habitat destruction have diminished biodiversity and forced many animals into closer contact with humans, increasing interactions that can lead to viral transmission (4). Additionally, global interconnection and the ease of international travel make it much easier for new pathogens to rapidly spread around the world. The recent emergence of zoonoses such as the COVID-19 coronavirus SARS-CoV-2, which is believed to have come from bats, and the monkeypox virus, which originates in rodents, provides particularly salient examples of the effects of zoonotic infections on public health (5, 6).

On a biological level, several factors help zoonotic viruses like SARS-CoV-2 jump species boundaries from animals to humans. One is a virus’s ability to enter the human cell. Entry is regulated by interactions between proteins on the surface of the virus and proteins called receptors that are embedded in the cell membrane. The virus also interacts with receptors in the cell membrane of its animal host, so if humans possess the same type of receptors, entry into the human cell is possible. For example, SARS-CoV-2 surface proteins bind to angiotensin-converting enzyme 2 (ACE2), a receptor found in many animals such as sheep, goats, and bats (7). The ACE2 receptor is also found on the surface of human throat and lung cells, which allows SARS-CoV-2 surface proteins to bind there (8). Viral surface proteins evolve over time and accumulate genetic changes that can improve receptor binding and increase the chances that the virus can enter the cell (9).

But in order to cause respiratory illness, the virus must be able to replicate. To do so, it must evade detection by our innate immune system, the first line of defense against foreign pathogens. The virus and the infected cell engage in an immunological game of tug-of-war for control over host cell processes. For example, upon detecting the virus inside the cell, the cell produces signaling proteins known as interferons that inhibit viral infection. SARS-CoV-2 counters this immune defense by producing a specific protein that can suppress interferon production (10). Viruses that successfully evade the host immune response can then strong-arm the cell into creating an environment conducive to viral replication.

Viral diversity is another important factor in zoonotic virus transmission. The more versions of a virus there are, the more likely it is that one of them can effectively enter the human cell, evade the immune system, and replicate. We can think of the process of virus transmission as a marathon in which the prize at the end is successful infection of a new host. Changes to the original virus may grant it better running shoes or higher stamina, helping it reach the finish line.

A virus’s ability to enter a cell depends on interactions between viral surface proteins and receptors on the cell surface. (Photo Credit: CDC on Unsplash).

These changes can occur in the form of mutations. Coronaviruses, for example, store their genetic information as RNA, and during replication an enzyme known as an RNA-dependent RNA polymerase generates many new copies of this viral RNA. The enzyme isn’t error-free and may introduce mutations into the RNA sequence during replication (11). Another enzyme known as an exoribonuclease cleaves nucleotides from RNA to help edit out the errors introduced during replication, but it also isn’t perfect (12). The variability in both the mutations and how they are edited can create new viral variants to participate in the marathon.
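As a back-of-the-envelope sketch of why this matters, the expected number of copying errors per new genome is roughly the per-nucleotide error rate times the genome length. The error rate and proofreading efficiency below are illustrative assumptions, not values from the cited studies; only the ~30,000-nucleotide genome size is a standard coronavirus figure.

```python
# Back-of-the-envelope sketch: expected copying errors per new genome copy
# is roughly (error rate per nucleotide) x (genome length in nucleotides).
# The error rate and proofreading fraction below are illustrative assumptions.

genome_length_nt = 30_000     # coronavirus genomes are roughly 30,000 nucleotides
error_rate_per_nt = 1e-5      # assumed polymerase error rate, before proofreading

expected_errors = genome_length_nt * error_rate_per_nt
print(f"~{expected_errors:.1f} errors expected per genome copy before proofreading")

# Even if the exoribonuclease removes, say, 90% of these errors (an assumed
# figure), some mutations still slip through each replication cycle:
proofread_fraction = 0.9
print(f"~{expected_errors * (1 - proofread_fraction):.2f} errors remaining after proofreading")
```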

Another source of viral diversity is recombination, which occurs when the RNA polymerase enzyme switches from the original RNA sequence it is copying to a new region of the virus or to an entirely new source of genetic material in the cell (13). Like mutations, recombination can produce a viral variant that has a better chance at overcoming genetic and immunological barriers associated with infecting a new host.

The ability of a virus to jump species boundaries depends on a complex interplay between viral evolution, viral genetics, and the human immune response. As we continue to navigate the ongoing COVID-19 pandemic, the monkeypox outbreak, and concerns over the viruses that will inevitably emerge in the future, scientists are developing methods to detect and genetically classify as many viruses as possible and to evaluate their potential to spread and cause disease (14, 15). These sequence databases will be invaluable tools in efforts to detect existing pathogens and those that have yet to be discovered.

References:

  1. https://www.cdc.gov/onehealth/basics/zoonotic-diseases.html
  2. https://www.nature.com/articles/d41586-020-02341-1
  3. https://www.nature.com/articles/d41586-022-01198-w
  4. https://www.theatlantic.com/magazine/archive/2020/09/coronavirus-american-failure/614191/
  5. https://www.science.org/doi/10.1126/science.abh0117
  6. https://www.who.int/news-room/fact-sheets/detail/monkeypox
  7. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7817217/
  8. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7356137/
  9. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8167834/
  10. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8934133/
  11. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8313503/
  12. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3744431/
  13. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8603903/#CR68
  14. https://www.nature.com/articles/s41564-022-01089-w
  15. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7656497/


What’s the Difference Between a Harmless Cosmetic Procedure and the Deadliest Poison on Earth? There Isn’t Any!


Every year, millions of Botox® procedures are performed, usually in the form of minimally invasive injections. In many cases, these injections are used for cosmetic purposes, such as eliminating wrinkles from the skin. Several therapeutic uses are common as well, such as treating migraines and even excessive sweating (hyperhidrosis). Obviously, if so many people do it and it has so many benefits, Botox® must be harmless, right? Well, the procedures themselves are considered to be quite safe; however, there is a dark, deadly secret hiding in each injection.

What is Botulinum toxin?

To understand what botulinum toxin (BoNT, the toxin behind Botox®) is, you first must get acquainted with the culprit responsible for its production: a microorganism. Specifically, botulinum toxin is produced by Clostridium botulinum, a gram-positive, spore-forming, anaerobic bacterium that can be found practically anywhere in the world, persisting in soil as spores.

C. botulinum itself is not particularly harmful. The toxin it produces, however, is considered the deadliest natural substance on the planet. This toxin is the key player in a devastating disease known as botulism, which is fortunately not very common these days.

Botulism Disease

You may have heard of botulism before, as it is a type of poisoning resulting from improper food preservation and canning. It is also the reason why honey is labeled with a warning stating that children under a year old should not consume it. Today, there are three main routes of infection with C. botulinum: intestinal, foodborne, and through infected wounds.

Botulism is extremely frightening because of one symptom in particular: paralysis. Like its close cousin Clostridium tetani, the causative agent of tetanus (which also causes paralysis), C. botulinum kills its victims in a drastic and efficient way. Most affected individuals who succumb to the illness die of respiratory failure, as the muscles required to inhale become paralyzed. But how, exactly, does this happen?

In order for muscles to contract, they must receive a chemical signal from a nearby neuron, a cell that is part of the nervous system. This chemical signal comes in the form of a neurotransmitter called acetylcholine (ACh). Inside the neuron, ACh is packaged into small, membranous pouches known as vesicles. When a vesicle fuses with the neuron’s outer membrane, it releases ACh into the tiny gap between the neuron and the muscle cell, where the ACh binds to receptors that trigger contraction. When botulinum toxin enters the body, it quickly attacks nerve cells. Specifically, it prevents the ACh-filled vesicles from fusing and releasing their contents, ultimately making muscle contraction an impossible task. The result is descending, flaccid paralysis: the paralysis begins with the face and neck muscles and works its way down, and is characterized by drooping, unmovable muscles.

Image created on BioRender.com by author.

Though the disease is very uncommon today, it does have an effective treatment. Because symptoms are caused by a toxin rather than by actively dividing bacteria, antibiotics are usually not used for treatment (although they may be necessary in cases of wound botulism, in which a wound is colonized by the bacteria). Instead, the treatment of choice is an antitoxin that neutralizes botulinum toxin still circulating, unbound, in the bloodstream. The effects of botulism are technically irreversible, but this does not mean an affected individual remains paralyzed forever. Gradually, nerves will recover from the effects of the toxin, but this is a very slow process that may take several months. Over those months, depending on how far the disease has progressed, affected people will most likely need supportive therapy.

History of Botulism

The origins of the disease can be traced back to the mid-1730s, when it began to be associated with the consumption of blood sausage. In fact, the name “botulism” comes from the Latin word for sausage: “botulus.” At this time, people were not aware of the consequences of improper canning techniques or food preparation, so foodborne botulism ran rampant throughout the population.

The most recent outbreak of the disease that gained attention occurred in 2015 in Ohio at a church potluck. This outbreak, the largest in nearly 40 years, affected 29 people and resulted in one death. The culprit was improperly prepared potato salad, which served as a powerful reminder of how dangerous C. botulinum can really be.

Today, the vast majority of botulism cases are intestinal rather than foodborne. These cases are often the result of children under a year old consuming honey, which can harbor C. botulinum spores. For this reason, intestinal botulism is also commonly known as infant botulism. Older individuals can usually eat honey without any problem, as the bacteria living in our digestive systems out-compete the C. botulinum spores, which never get the chance to germinate. However, since an infant’s digestive system is underdeveloped, it is much easier for C. botulinum to colonize the intestines and cause disease.

Warning label on a container of honey. Photo source: The Apiarist.
What does this have to do with Botox?

Though it takes only a minuscule amount of botulinum toxin to kill a person, millions of people get Botox® injections, which contain BoNT, every year. When used for cosmetic or therapeutic purposes, the toxin is carefully controlled through sterilization and dilution and delivered in extremely small doses. Because the toxin works by preventing nerve signals (neurotransmitters) from reaching muscles, Botox® is used to treat problems having to do with muscle contraction. This is especially helpful for people with chronic muscle pain, as it forces these muscles to relax.

Most commonly, Botox® is associated with cosmetics and the elimination of wrinkles. It works very well for dynamic wrinkles, which are caused by muscle movement, like smile lines and wrinkles in the forehead. However, the injections do not accomplish nearly as much when it comes to static wrinkles, or the sagging skin that usually comes with age. Because the injected toxin paralyzes the muscles at the injection site, such as in the forehead, the wrinkles associated with movement in that area will no longer appear. The effects of Botox® last a few months, but because the nerves eventually recover (as with botulism disease), injections have to be repeated for the desired result to last.

Common areas for facial Botox® injections. Photo source: Bella Donna Med Spa.

A concern that frequently comes up in discussions about Botox® is whether anyone has ever come down with botulism from receiving an injection. The answer is quite unclear, as marketing companies often claim that the procedure is harmless 100% of the time, save for soreness at the injection site. On the other hand, a study done in 2018 identified 86 cases of botulism attributed to Botox® injections. The observed patients presented with early symptoms of botulism and responded well to treatment with the BoNT antitoxin. None of the patients developed serious complications or respiratory problems, but they were also promptly treated with the antitoxin.

The overwhelming majority of Botox® injections are indeed very safe, and cases of botulism are extremely rare today (fewer than 1,000 cases worldwide). It is important to note, however, that C. botulinum is currently considered a major bioterrorism threat because of its toxin’s deadly nature. At the very least, concern about contracting the disease from a cosmetic procedure is minimal.


Air conditioning: a global contradiction


At the turn of the 20th century, printing books and newspapers in the New York area during the hot and humid summers was nearly impossible. The pages would become warped and shriveled, and the ink would often smear. Attempting to solve this problem, the Sackett-Wilhelms Lithographing & Publishing Company hired inventor Willis Carrier in 1902 to create a device that would control the temperature and humidity of their printing plant. Carrier did just that, and since then, air conditioning has reshaped modern life. The innovation of air conditioning has made it possible to manufacture and distribute items from textiles to chocolates regardless of the local climate, and introducing the technology into homes and businesses allowed populations in the American South and West to grow rapidly. Air conditioning provides a haven during some of the hottest summer days, increases worker productivity, and even protects our health from heat-related illnesses on particularly hot days.

But this technology also uses a huge amount of electricity and relies on chemicals that act as powerful greenhouse gases. Unfortunately, our efforts to stay cool have contributed to making our world hotter through climate change. Recent research, however, can inform efforts to reduce and offset the negative impacts of air conditioning, as well as provide technological solutions to make the technology more climate-friendly.

Air conditioning is reaching new populations

In recent years, air conditioner use has been on the rise in developing countries with warm climates and growing economies. Researchers who study energy markets estimate that there is vast potential for growth in air conditioning markets in countries with huge populations in blistering hot climates. Their estimates show that places like India, China, Indonesia, Nigeria, Pakistan, and others all have as much or more heat exposure than the US. As these countries become wealthier, and electricity becomes more widely available, air conditioning will also become accessible in more homes. Each of these countries could eventually require just as much energy for air conditioning as the US.

Map shows the average cooling degree days per year across the world from 2009–2018. Cooling degree days measure how many degrees the average daily temperature exceeds 18.3˚C, summed across the days of the year, and are often used to estimate cooling demand. Many developing countries that have not fully adopted air conditioning have some of the highest cooling degree day measurements, and thus high potential demand for cooling.
Source: Biardeau, Davis, Gertler et al., Nat Sustain 2020.
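To make the definition in the caption concrete, here is a minimal sketch of how cooling degree days could be tallied from daily average temperatures; the sample temperatures are invented for illustration.

```python
# Minimal sketch of the cooling-degree-day calculation described in the
# caption: for each day, count how many degrees the average temperature
# exceeds 18.3 C, then sum across days. Sample temperatures are invented.

BASE_TEMP_C = 18.3

def cooling_degree_days(daily_avg_temps_c):
    """Sum of (daily average temperature - base), counting only days above the base."""
    return sum(max(t - BASE_TEMP_C, 0.0) for t in daily_avg_temps_c)

week_of_temps = [16.0, 19.3, 24.8, 30.1, 27.5, 21.0, 17.2]  # degrees C, illustrative
print(f"{cooling_degree_days(week_of_temps):.1f} cooling degree days this week")
```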

Widespread access to air conditioning around the world will come with mixed blessings. On the bright side, workers are often more productive when their workplaces are at a comfortable temperature. Access to air conditioning protects against heat-related illnesses, including heat stroke, heat cramps, and heat exhaustion. One study estimated that heat-related deaths in the US have been cut by 80% since 1960. Cooling indoor spaces in developing countries will save even more lives by protecting against heat-related illnesses, as well as providing relief for many in the world’s hottest places as global warming makes them even hotter. But such relief will require much more electricity.

Air conditioning requires a substantial amount of electricity. A home air conditioning unit uses about 20 times as much electricity as a ceiling fan. About 17% of residential electricity use in the US, and 10% of total global electricity use, goes to air conditioning. Bringing cooling technology to developing countries would lead to a huge increase in electricity consumption globally.
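The "20 times" comparison is easy to sanity-check with ballpark wattages; the figures below are illustrative assumptions rather than numbers from the article.

```python
# Rough comparison of electricity use: a window air conditioner versus a
# ceiling fan. Wattages and run time are ballpark assumptions for illustration.

ac_power_w = 1_000        # assumed window AC draw, watts
fan_power_w = 50          # assumed ceiling fan draw, watts
hours_per_day = 8

ac_kwh = ac_power_w * hours_per_day / 1_000
fan_kwh = fan_power_w * hours_per_day / 1_000

print(f"AC:  {ac_kwh:.1f} kWh/day")
print(f"Fan: {fan_kwh:.1f} kWh/day")
print(f"Ratio: ~{ac_kwh / fan_kwh:.0f}x")   # roughly the 20x figure cited above
```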

If electricity usage were the only issue with cooling, the problem could be addressed by sourcing more electricity from clean power sources like wind and solar. But air conditioning also depends on a major source of greenhouse gases – refrigerants.

Making refrigerants clean

Air conditioners work by alternately pressurizing and depressurizing chemicals known as refrigerants and cycling them through thin coiled pipes to move heat out of a room, cooling the air inside.

The most commonly used refrigerants are known as hydrofluorocarbons (HFCs), which become potent greenhouse gases when they escape from appliances through leaks and improper disposal. HFCs range from hundreds to thousands of times more potent than carbon dioxide in their effect on global warming.
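That potency comparison is captured by a refrigerant's global warming potential (GWP): multiplying the leaked mass by the GWP gives an approximate carbon-dioxide-equivalent mass. The leak size and GWP value in this sketch are assumptions for illustration only.

```python
# Sketch of a carbon-dioxide-equivalent calculation for a refrigerant leak:
# CO2-equivalent mass = leaked mass x global warming potential (GWP).
# The leak size and GWP value are assumptions for illustration only.

leaked_refrigerant_kg = 0.5     # assumed leak from a small home unit
assumed_gwp = 2_000             # within the "hundreds to thousands" range cited above

co2_equivalent_kg = leaked_refrigerant_kg * assumed_gwp
print(f"A {leaked_refrigerant_kg} kg leak is equivalent to ~{co2_equivalent_kg:,.0f} kg of CO2")
# For scale, that is roughly what a typical passenger car emits over a few
# thousand kilometers of driving (assuming ~0.25 kg CO2 per km).
```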

But there are alternatives to HFCs. Ammonia and carbon dioxide can be used as refrigerants in air conditioners and have a much lower impact on the climate. Some of the technologies that use these lower-impact refrigerants are even more energy efficient than the air conditioners commonly used now, and would use less electricity. Scientists project that phasing out HFCs could prevent around 0.1°C of the roughly 2°C of warming projected by 2050.

But ammonia and carbon dioxide do have drawbacks that have prevented them from becoming the most widely used refrigerants. Ammonia is toxic and can harm or even kill people when it reaches high enough concentrations in the air. These risks can be mitigated with regular maintenance and adequately trained personnel, but they do not disappear entirely. Carbon dioxide, on the other hand, is not toxic, at least in the amounts in which it would be used as a refrigerant for air conditioning. But it could be more costly to put into practice because of its chemical properties, and it still involves risks. Carbon dioxide systems must operate at much higher pressures than those using HFCs or ammonia, and air conditioners built for such high pressures could be less safe or more expensive to manufacture. However, there are ongoing efforts to optimize the technology for each of these refrigerants and make them safer.

Although transitioning away from HFCs might not be trivial, it would not be the first time that a class of refrigerants has been phased out because of its harmful effects on the environment. Before HFCs became the standard in air conditioners and other cooling technologies, chlorofluorocarbons, or CFCs, were most common. But CFCs were found to be depleting the ozone layer and were phased out starting in the early 1990s. HFCs may someday be phased out just as CFCs were.

By supporting the development and adoption of these new refrigerants, or of other technologies that may come along, we can protect our climate while holding onto the cooling technology that makes modern life possible and that may help us cope with the effects of climate change we cannot prevent.


Ancient DNA Helps Us Understand Pathogens of the Past


Herpes simplex virus, the microbe that causes pesky cold sores, has been around for millennia. More than 2,500 years ago, ancient Greek physicians first used the word “herpes,” a term derived from the ancient Greek word meaning “to creep” or “crawl,” to describe the painful and easily spread blisters. Herpes is difficult to cure because the virus can hide away in a person’s nerve cells for a long time without causing any symptoms. Environmental and physiological triggers can cause the virus to reactivate and infect cells. The fact that humans have learned to simply coexist with the virus raises an interesting question – just how old is herpesvirus?

A team of researchers recently isolated and sequenced the genetic material of ancient herpesvirus from the teeth of humans who lived during the Bronze Age, which suggests that the virus existed as early as 5,000 years ago. Changes in cultural practices, such as the emergence of romantic kissing, may have contributed to the explosion of herpesvirus infections at the time. This study is just one of many examples of how paleomicrobiology, the study of microbes in ancient remains, provides surprising insights into the origins and evolution of infectious diseases.

The world is now facing the coronavirus disease 2019 (COVID-19) pandemic, but major disease outbreaks have wreaked havoc on humans for thousands of years. Infectious diseases such as plague, cholera and flu wiped out entire communities, leaving an indelible mark on human history. In both the past and the present, the emergence of infectious diseases has also been a driving force behind advances in medicine and public health. Therefore, studying the evolutionary history of human pathogens can shape global surveillance efforts to better protect human health and wellbeing.

Studying ancient pathogens has proven to be difficult for several reasons. The biggest obstacle is finding enough intact microbial genetic material, usually DNA, that can be isolated from ancient human remains. Ancient remains can include skeletal parts (bones and teeth), mummified soft tissue, hair, or human-associated trace fossils, the latter of which include fecal samples and the sediment and dirt near the remains. DNA from pathogens has been successfully isolated from almost all of these biological samples but often makes up a minuscule fraction of a specimen’s total DNA – sometimes less than 0.5%. To overcome this obstacle, researchers use analytical tools such as the polymerase chain reaction (PCR) to amplify small quantities of DNA and match the sequences to those of known pathogens. New genomic techniques such as next-generation sequencing make it possible to detect DNA from both known and novel pathogens, giving us more insight into ancient human populations and the pathogens that existed in the past.
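A tiny starting fraction is workable because PCR amplification is exponential: in the ideal case, each cycle doubles the number of copies of the targeted stretch of DNA. The starting copy number and cycle count below are illustrative assumptions.

```python
# Sketch of why PCR can rescue trace amounts of pathogen DNA: in the ideal
# case, each cycle doubles the number of copies of the targeted sequence.
# Starting copies and cycle count are illustrative assumptions.

starting_copies = 100        # assumed trace amount of pathogen DNA in a sample
cycles = 30                  # a common number of PCR cycles

copies_after = starting_copies * 2 ** cycles
print(f"{starting_copies} copies -> ~{copies_after:.2e} copies after {cycles} cycles")
# ~1e11 copies: enough material to sequence and match against known pathogens.
# Real reactions fall short of perfect doubling, so this is an upper bound.
```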

Advances in ancient DNA analysis have proven to be a powerful tool for understanding the history of infectious diseases such as plague, caused by the bacterium Yersinia pestis. There have been three separate plague pandemics, which originated in different geographical areas and spread across Eurasia by different routes. The first, known as the Justinian plague, began in 541 CE; it originated in central Africa and spread to the Mediterranean region. The second and most well-known pandemic, known simply as the Black Death of 1347, spread throughout Eurasia and is estimated to have killed approximately 25 million people in Europe alone. The third pandemic started in 1894 in Yunnan, China, and spread throughout the rest of Asia and the world.

Researchers isolated ancient DNA of Yersinia pestis, the bacterium that causes plague, from the teeth of fourteenth-century remains found in the foothills of the Tian Shan mountains in Kyrgyzstan (Image by Makalu from Pixabay).

Until recently, researchers were not sure when and where the second pandemic started. To answer this question, a team of researchers exhumed remains from a burial site located in modern-day Kyrgyzstan believed to house victims of the fourteenth-century epidemic. The researchers sequenced the DNA they isolated and were able to reconstruct the Yersinia pestis genome from the samples. These data pointed to a new geographical origin for the second plague pandemic. Discoveries like this will help steer future archaeological expeditions in the quest to trace the origins and spread of the plague.

The Yersinia pestis that exists today is not the same as it was during the past pandemics, as the pathogen has evolved over time. For example, the 700-year-old Yersinia pestis strain responsible for the Black Death pandemic is part of a lineage of Yersinia pestis strains that likely emerged 7,000 years ago. Fifty-six Yersinia pestis strains, some of which are now extinct, have been isolated over a 50-year span in present-day Kyrgyzstan alone, highlighting the complex evolutionary history of the bacteria. Evolution is an important part of pathogen biology. It is the driving force through which microbes accumulate genetic changes that help them escape the host immune response and become more effective at infecting humans. Paleomicrobiology provides a window into the past for us to understand how microbes have evolved over time and predict how they might change in the future. This information will help us be better prepared for future infectious disease outbreaks.

Paleomicrobiology is a collaborative field that combines the efforts of archaeologists, historians, and scientists to understand humans’ complex history of and relationship with infectious diseases. In the future, it will also be important to study non-infectious microbes that co-evolve with us; although these microbes typically have a positive impact on human health, they can evolve and become opportunistic pathogens under certain circumstances. Therefore, studying the evolution of seemingly harmless microbes can help predict disease emergence. As the field continues to develop, paleomicrobiology will help us better understand how microbes have and continue to impact our lives.



Minerals: The Valuable Gifts of Nature


Early in its 4.5-billion-year history, a molten Earth began to solidify its rocky surface, creating an atmosphere and developing the blue oceans and land where a huge variety of life forms evolved over billions of years. During each stage of this development, the Earth formed different minerals in varied environments through unique processes. Keeping this in mind, let’s learn more about minerals.

Minerals are:

  • Naturally occurring.
  • Inorganic substances [e.g. gold (Au) and coffinite (USiO4)].
  • Of definite chemical composition.
  • Possessing an ordered internal structure.
  • Solid at room temperature.

Without all these features, a material is not considered a mineral. For example, oxygen and water are not minerals because they are not solids. Mercury is not a mineral because it is liquid at room temperature and, as a result, lacks a crystal structure. Synthetic diamonds are not minerals because they are made in a lab.

Forming minerals is all about changing the physical, chemical, and biological properties of nature, and, according to one study, there are about 57 different processes for accomplishing this. Here are several examples of these processes:

  1. Rocks are collections of minerals. Magma cools on or below the surface to form igneous rocks like granite, which contains feldspar, quartz, and other minerals. Another way to form rocks is to accumulate sediments, such as sand and the remains of living things like shells, on the Earth’s surface, producing sedimentary rocks like limestone, which contains calcite. If granite and limestone are placed in different environmental conditions, their chemical composition and physical appearance will change, turning them into gneiss and marble, respectively. This phenomenon, known as metamorphism, forms new rocks and minerals under heat and pressure.
  2. Early life forms (and later, plants) developed photosynthesis and produced oxygen at a time when there was little of it in the Earth’s atmosphere. As oxygen became a more common gas in the Earth’s atmosphere and oceans, oxidation events could occur, forming many minerals such as mansfieldite and cesanite.
  3. Lightning strikes have temperatures hot enough to melt silica sand, clay, and rocks. When lightning strikes these materials, fulgurites are formed along with minerals like graphite, iron, and moissanite.
  4. Plate tectonics, the motion of the Earth’s crustal plates, causes mountains to rise, oceans to form, landmasses to break apart, and hidden metamorphic rocks to appear on the surface, among other effects. Through these events, minerals such as trinepheline, kokchetavite, and barioperovskite are formed.
What are some uses of minerals?

Minerals are often used as raw materials to make items; they appear in almost all industries, and a single kind of mineral can be used in several different ones. If you are reading this article on your phone, you’ll find that many different minerals work together to build and operate your smartphone. You can swipe the webpage up and down thanks to a thin film of indium, and you can see and hear video and audio in good quality thanks to tantalum. If you accidentally drop your phone, you don’t have to worry as much about breaking the glassy screen, because its durability comes from silica and potassium. Behind the screen are the integrated circuits, the fundamental building blocks of your phone, along with the battery, which is responsible for power storage. Both depend heavily on minerals such as silicon, gold, silver, diamond, lithium, lead, nickel, and cobalt for their performance and structure. Another mineral, copper, is used in the phone’s battery and charging circuitry; plugging the phone in recharges the battery through copper wires. But wait, where does the electricity come from? It comes from various resources like sunlight, wind, and moving water, but to turn any type of energy into electricity, we need generators, which are themselves built from minerals.

From left, top to right, down: Gold, Graphite, Iron, Calcite, Lazurite, Pyrite, Silver, Forsterite, Magnetite, Hematite. Photo credit: https://www.mindat.org/

What about other industries? Cement, based on a mixture of minerals like gypsum, calcite, and magnesite, binds construction materials together. Dishes, pottery, electronics, skis, tiles, and artificial joints are made of ceramic material, which is composed of minerals like feldspar and kaolinite. Speaking of kaolinite, you may have noticed the word “eco-friendly” on paper products these days; kaolinite helps make paper and board pulp clean, safe, smooth, glossy, and printable. Calcium bentonite helps remove impurities from your body when you use shampoos, toothpastes, and soaps to keep yourself clean. Now, how many items do you use daily that contain plastics? Differences in stiffness between plastics are partly due to the addition of ground mica, which enhances their mechanical properties and makes them more rigid. Ground mica also adds a pearly luster to cosmetics. Finally, the beautiful appearance of some minerals can’t be ignored: olivine, emerald, diamond, quartz varieties such as citrine, garnet, and others are worn as gemstone jewelry.

Many of the items we use today are made from minerals. One article is not enough to present all of their uses, so it is enough to say that minerals are valuable gifts of nature that humanity has learned to harness for its benefit.



Organoids: Animal-Free Alternatives in Medical Research


The use of animals in research has been a controversial topic for some time. Opinion remains divided: many people believe that animals are an ideal choice not only for enhancing our understanding of complex diseases such as cancer, but also for testing drugs and cosmetics. However, many others argue that using animals is unethical, cruel, and unnecessary.

In Great Britain alone, the use of animals in research increased by 6% in 2021. This number is likely to rise as diseases become more complex and the demand for ground-breaking research grows. In an ideal world, we wouldn’t have to harm animals at all for research purposes, so what other options do we currently have?

Growing cells in 3D is really, really tricky…

Human-derived ‘cell lines’ are usually the easiest and most reliable option for studying human diseases or testing new drugs. They are originally derived from healthy or diseased tissue samples donated by consenting patients, and are then frozen, stored, and reused over and over again for experiments. Whilst they provide a living model to experiment on, they are still far from ideal for investigating complex diseases or drug interactions. Cell lines only ever grow in a 2D layer at the bottom of a petri dish, never in 3D like an actual living organ. Because of this, they also grow in a strange arrangement – on top of each other, upside down or back to front – which is not what happens in the body. Another shortfall of these simple models is that only one cell type is growing in the dish at a given moment. This makes them incredibly basic, and it is difficult to draw a solid conclusion from them without repeating the same experiments on other types of cells.

Modelling a piece of living tissue or a whole organ without resorting to animal work is a difficult task. Over the past decade, engineered or plant-based scaffolds have become popular amongst scientists and researchers; these can be coated in cells and studied like a living organ. Plant-based scaffolds are derived from natural substances, which are engineered to interact with and promote the growth of living cells. Many natural materials have been tried and tested, but some of the most interesting include alginate (from seaweed), starch (from plants) and cellulose, which can be harvested from apples. The general idea is that these materials are sterilized and prepared, and cells can then be seeded on top of them in a 2D layer. Cells are a lot happier when they can grow on a suitable surface, as opposed to a petri dish! There are several applications for this technology, such as using plant-based scaffolds to generate large numbers of cells for further experiments, like testing drugs. In addition to laboratory work, there is potential for this scaffold-plus-cell technology to be used in medicine, for example as a replacement for skin grafting. Unfortunately, this type of application is still under scrutiny, as many factors need to be considered, such as allergic reactions, inflammation, and rejection. Just because a material behaves a certain way in the laboratory does not mean it will behave the same way inside the body – it may degrade or cause more harm than good.

If natural cells are becoming too basic to study tricky diseases, even when grown on a scaffold, is it possible to grow a functioning organ instead?

Basic infographic to show how plants can be harvested, broken down in the laboratory and coated with cells for further research. (Image generated using BioRender by author)
Organoid discovery

This is a particular question which has been on a lot of peoples’ minds for some time, and it turns out the answer is a lot more complex than we think.

At this time, methods to grow fully functioning organs from scratch in the laboratory are yet to be discovered. Instead, ‘organoids’ are becoming popular models in laboratory research. Organoids are a halfway house between cell lines and whole functioning organs; they are very small, round, and originate from induced pluripotent stem cells (iPSCs), embryonic stem cells (ESCs) and even healthy or cancerous patient-derived adult stem cells. But how did scientists figure out how to turn stem cells into organoids?

Dutch molecular geneticist and Keio Medical Science Prize winner Hans Clevers and his team at the Hubrecht Institute pioneered organoid research. In 2008, Clevers hypothesized that he could use intestinal biopsy tissue to obtain stem cells, which could then be used to make even more stem cells. What actually happened was ground-breaking. Not only did more stem cells form as he expected, but the bunch of cells in the petri dish resembled the original tissue and started to look more like the small intestine! This was the birth of the first ever organoid.

Since this discovery, several researchers have been trialling methods developed by Clevers. Both healthy and diseased tissue can be harvested, usually by biopsy, from a variety of consenting patients who then go home after the procedure and continue with their day. This tissue is then transferred to a laboratory, where complex protocols are carried out to retrieve the stem cells and develop them into organoids.

Laboratory grown ‘personalized’ organoids

Organoids are not just limited to the intestine either, as researchers have successfully developed healthy patient-derived organoids from the stomach. These organoids can then be used in the laboratory to model the stomach lining, for example to simulate stomach ulcers or cancer development. Unhealthy patient tissue, such as tumor tissue, has also been transformed into patient-derived tumor organoids (PDTOs), also known as tumoroids, including breast, colorectal, liver, brain, and pancreatic cancers. Tumoroids in particular are a huge breakthrough in cancer research, as they allow scientists to investigate and study tumors outside of the body. For example, cell signaling and interactions with anti-cancer or chemotherapy drugs can be observed extensively, without having to experiment on the patient or replicate the disease in animals.

There are many applications and types of research which use organoids in addition to drug discovery and disease modeling. Further applications are shown in the image below.

Workflow for generating organoids and their further applications. (Image generated using BioRender by author)

Human-derived organoids have opened up a world of opportunities for scientists and researchers alike. They provide an ideal model for carrying out a range of experiments without having to harm an animal. Organoids help to solve the problems associated with 2D cell lines, as they provide a variety of different cell types whilst also somewhat resembling the original tissue or organ. This means, for example, that researchers do not have to test the same drug or technique on several different cell lines in order to reach a conclusion. Organoids have also opened the door for ‘personalized medicine’: if patient-derived organoids respond to drugs in the laboratory, does this mean those drugs can be safely given to the patient? We are still far from drawing this conclusion because organoids are still relatively simple systems, but they may reduce the need for extensive and expensive clinical trials in the future.

Whilst promising, it is worth mentioning that organoids lack many features which we take for granted in our bodies, such as a circulatory, respiratory, and immune system. Currently, the majority of organoid research does not consider these factors, which limits the conclusions researchers can draw. Another issue is the size of organoids, as they are usually no bigger than 1 mm in diameter. Because they are so minuscule, they are fiddly to work with and have complicated growth and maintenance requirements.

Ultimately, we are still a long way from completely eradicating the use of animals in research, but organoids are an exciting prospect for the future of biological research and are definitely a step in the right direction.

The post Organoids: Animal-Free Alternatives in Medical Research appeared first on Illinois Science Council.

Houseplants Heal: The Benefits of Having and Caring for Plants in Your Home

0
0

Houseplants are often thought of as ornamental pieces to liven up a room. When I got my monstera, pothos and aloe for my apartment, aesthetics was indeed the only concern. However, it turns out that indoor plants do so much more.

Nowadays, most people spend around 90 percent of their time inside homes, offices or schools, and many newer buildings are well-insulated with poor airflow. This leads to the accumulation of pollutants including carbon dioxide, nitrogen oxides and harmful volatile organic compounds (VOCs). At high enough concentrations, these pollutants can lead to severe symptoms and over time can cause Sick Building Syndrome (SBS). SBS describes the collective health issues that arise from exposure to an unsuitable indoor microclimate, including congestion, headaches, drowsiness, irritability and distress.

Pulling out pollutants

Carbon dioxide is a common indicator of indoor air quality (IAQ). Plants consume carbon dioxide when they photosynthesize, so in theory, indoor plants lower carbon dioxide levels. In practice, studies show that hundreds of plants would be needed to offset the amount of carbon dioxide each person produces. Additionally, carbon dioxide consumption is light-dependent, and most indoor plants receive nowhere near the light they need, further limiting their carbon dioxide-reducing abilities.

While carbon dioxide levels are too high for plants to have an appreciable effect, plants have been shown to significantly reduce other, less concentrated pollutants. One study showed plants significantly reduce the levels of nitrogen oxide compounds commonly released from automobiles. The study suggested that the soil microbiome of the plants is responsible for this action, though more work needs to be done to reach a consensus. Because microbes do the work, purification occurs day and night without the need for light, even if the rate of pollutant removal is relatively small. This is especially effective for compounds that are dangerous at relatively small concentrations, such as nitrogen oxides and other VOCs. Additionally, airborne particulate matter, a complex mixture of solids and aerosols under ten micrometers that can irritate the lungs, is reduced by about 50 percent in areas with plants.

It’s important to note that while being affordable and sustainable, using potted plants to improve IAQ is a strategy best used in conjunction with effective ventilation and air filtration; houseplants can only compensate for smaller concerns. An alternative solution to potted plants is active biofilters, also known as green walls, which combine plants and ventilation. In these setups, air is blown through a large volume of plants and growth material with beneficial bacteria to remove pollutants. Green walls are more effective than potted plants at improving IAQ but require much more setup and come at a higher cost.

Beneficial emissions

Houseplants do more than remove harmful compounds from the air to improve IAQ. Relative humidity below 30 percent can cause eye irritation, skin dryness and an increase in disease transmission. Plants raise humidity to healthier levels (30-50 percent) through transpiration, or evaporation from their leaves. This process is self-regulated, meaning plants will only release enough water to bring humidity into a suitable range, as humidity above 60 percent is also harmful.

Plants also emit non-harmful VOCs as well as essential oils with potentially immunoprotective or antiviral properties. These beneficial VOCs include flavor or scent terpenoids, aldehydes, esters and alcohols depending on the plant. These interfere with microbes at a molecular level, lowering disease transmission. Some essential oils are known to improve concentration and productivity, reduce stress, and improve mood. Together, these beneficial compounds released from indoor plants parallel the practice of forest bathing, spending time in green areas such as forests to gain health benefits, but in a more accessible manner for urban environments.

Plants’ psychological benefits

Perhaps more impactful than the improvement of IAQ are the psychological benefits of indoor plants. Just a brief five-minute visual exposure improves concentration and mental health, as reflected in stress indicators such as heart rate and blood pressure. This may be due to something as simple as improved aesthetics, since most plants are considered beautiful to some extent. The color green is considered calming and could have an uplifting effect. Having plants around also improves the perception of IAQ and general well-being.

Caring for your own plants provides additional benefits. The biophilia hypothesis states that humans have an inherent need to connect with nature or other living creatures. Thus, more time spent in nature increases affinity to it, leading to mindfulness. Empirical results show that more time spent caring for plants, a greater number of plants and a longer duration of experience are all correlated with improved mental health and mindfulness, which has an impact on long-term physical health. People with more plants are generally happier, less aggressive and have fewer mental disorders.

Photo: Huy Phan

Plants are effective, affordable and sustainable, and when curated well they can benefit both physical and psychological health. Peace lilies, ivy and weeping figs are common houseplants in both studies and homes that are easy to acquire and manage. Having more plants around increases exposure to their benefits. Plants can improve air quality, which enhances concentration and productivity and limits symptoms associated with Sick Building Syndrome, and caring for plants boosts mindfulness and mental well-being.

Speaking from experience, the plants in my apartment have been a source of enjoyment and clarity. I often start with smaller plants or cuttings, and it’s wonderful to see them grow. I’ve found that it can be a fulfilling experience learning how to best care for each plant. Plants can be a great purchase or gift to decorate indoor spaces and improve our wellbeing.

Editor's note: If you are a pet parent considering becoming a plant parent, research which plants are safe for your current companions! For example, lilies are extremely dangerous to cats, and some holiday plants are toxic to both cats and dogs. There are many safe plants out there for your furry companions so it's possible, with a little planning, to enjoy the benefits of both pets and plants!

The post Houseplants Heal: The Benefits of Having and Caring for Plants in Your Home appeared first on Illinois Science Council.

Can Green Supplement Powders Boost Immunity?

0
0

In recent years, green supplements have become remarkably popular in the health-and-wellness world. Green supplements are fruits and vegetables that have been dried and compacted into powder form. Otherwise known as “superfoods”, these powders are widely believed to improve gut health through digestive enzymes and probiotics, and to boost immunity through the antioxidants in vegetables and fruits. Because they are water-soluble and easy to incorporate into one’s diet, green supplement powders could be a convenient way to strengthen the immune system if effective.

During the COVID-19 pandemic, people with a wide range of health backgrounds endured severe complications from the virus, especially before vaccines became widespread. Because of this, individuals across the globe gained interest in finding new holistic and convenient ways to optimize their body's ability to fight disease. Although vitamin C is a popular supplement for strengthening immunity, a growing understanding of the relationship between food and health has heightened people's concern for nutrition and pivoted their interest towards fruit and vegetable supplementation. Experts predict that the demand for green supplements will continue to rise in the next few years, but are these new green supplements really an effective way to boost immunity?

The Benefits

In one study in California, fruit and vegetable powder mixes significantly lowered the blood pressure of hypertensive individuals who took the supplement for 90 days. Long-term health conditions like hypertension weaken the immune system. Therefore, this study suggests that green supplement powders may improve hypertensive individuals' ability to fight disease, lowering their risk from COVID-19 or other medical complications.

Another study on healthy subjects between 18 and 52 years old found that green supplements raised blood antioxidant levels into ranges associated with a lower risk of disease. Antioxidants are substances that remove potentially damaging toxins from the body, known as “free radicals”. Free radicals are produced from normal metabolic processes or exposure to UV light, cigarette smoke, and other environmental pollutants. Like sponges, antioxidants absorb these free radicals, which reduces oxidation and thus the risk of many diseases.

In the same study, researchers found that the plasma malondialdehyde levels dropped in people who took the greens periodically for 7 days. Plasma malondialdehyde is a substance involved in a process that damages cells, and reducing its levels helps cells respond to diseases more efficiently.

Together, these studies suggest that green supplements may make a meaningful difference in the health of those who take them. But are these green powders a cost-effective way to boost the immune system?

Are Green Supplements Worth It?

Purchasing green powders instead of whole fruits and vegetables may be a more convenient way of achieving a balanced diet since these powders don’t “go bad” and are easy to implement into one’s daily regimen. However, green supplements’ nutritional value may vary depending on how the fruits and vegetables are sourced, processed, and dried. Therefore, not all green supplements may effectively strengthen the immune system.

Furthermore, people with balanced diets may not need to pay extra money for these supplements. Simply incorporating a wider variety of fruits and vegetables into one’s diet can be enough to boost overall health and immunity. As such, spending money on green supplements may be redundant and, therefore, unnecessary.

Conclusions

Studies have shown that green supplement powders can improve immunity by raising antioxidant levels and lowering toxin levels in healthy individuals. Green supplements can also lower hypertensive individuals’ blood pressure, boosting their body’s ability to fight disease. However, the quality of green supplement powders may vary between brands. The best way to know whether you should implement green supplements into your diet is to consult a family doctor or nutritionist because eating whole vegetables and fruits or taking other supplements may be a more suitable option for you.

The post Can Green Supplement Powders Boost Immunity? appeared first on Illinois Science Council.

Meet Man’s Best Friend… and Hero

0
0
Coming face to face with our best friends

There’s just that something behind your canine companion’s eyes, but you can’t tell what. Dogs seem to know us well, but how well do we know the pooch perched at the foot of the bed? Canine cognition is still an emerging science, with research groups at Duke, Yale and more starting to look beyond those puppy eyes.

Along with seeking to understand the furry friends that bring us comfort, researchers explore the cognitive processes that help dogs become our heroes, from sniffing out landmines to alerting diabetics to out-of-range blood sugar levels.

Understanding cognition

To consider canine cognition, researchers applied their knowledge from working with humans. Cognition is the mental act of learning and remembering through lived experiences and thought. Perception, decision making and responsiveness are three processes under the umbrella of cognition.

Cognition allows constant intellectual growth, in humans and our animal counterparts. This learning can help us overcome emergency situations.

One study evaluated whether cognitive complexity contributes to how managers respond to emergencies. Management teams with greater cognitive complexity make judgments about their surrounding environments by adopting more nuanced perspectives, and firms with greater cognitive complexity tended to have better emergency preparedness. While this study explored the collective cognition of managerial firms, a similar logic can be applied to anyone – people and animals alike – handling challenging, high-pressure scenarios.

Comparing infant and canine cognition

Scientifically, it's hard to deny your canine connection. In some ways, dogs are just like us. Infants can identify human-produced cues and gestures, and so can dogs. Research shows that, while nonhuman primates tend to struggle with such cues, dogs can interpret human gestures to find concealed food. All dogs perform at relatively high levels on these kinds of tasks.

Even chimpanzees cannot compare to dogs and babies. Strong cooperative communication (the act of individuals collaborating with one another) indicates strong social skills. Domestication and evolution are key factors that enable household dogs to develop this level of social understanding during early development.

A service dog in their vest.

But this doesn't distinguish our loyal lap dogs from our fearless friends. By conducting more studies based on infant research, experts learned that discriminatory skills (i.e., differentiating between smells) and cooperative-communication skills are among the traits that make some dogs better than others at working with humans. Some individual differences – especially communicative abilities – could be inherited within a breed.

In talks with the American Psychological Association, canine researcher Stanley Coren specified three ways dogs acquire intelligence: through instinct, adaptation and working/obedience. Instinct can be rooted in intentional breeding practices, whereas adaptation and working/obedience are learned through environmental interaction and structured experiences.

This may sound familiar because toddlers have similar cognitive processes. However, dogs’ social skills and unique sensory abilities suit them for more advanced tasks.

Getting to work in the field

Landmines – explosives on or under the ground used during war – do not automatically deactivate at the end of a conflict. While more countries move toward eliminating landmine use globally, many landmines remain in war-torn areas.

Despite our awareness of landmines, humans struggle with landmine detection. Human teams take longer to survey potentially landmine-riddled plots of land, especially as metal detectors prove ineffective at sensing non-metal explosives. This is where the dogs come in.

Dogs’ discriminatory abilities allow them to more efficiently search for landmines. Canines already have strong discriminatory abilities and senses of smell due to their large olfactory – or smell – centers, 40 times the size of a human’s.

A service dog at the beach.

Dogs with heightened discriminatory abilities can more easily sense target odors. Not only can dogs identify chemicals leaking from devices, they can also tune out other scents. Varying soil types, moisture levels, microorganism presence and climate conditions all contribute to background odors that complicate scent discrimination. Research shows that dogs can comfortably differentiate up to 10 odors. Considering these factors, dogs perform comparatively better than humans at uncovering landmines.

Working closer to home

While canines surpass humans in olfaction, dogs also have impressive hearing. Dogs can detect high pitches that adult humans cannot hear, at frequencies of up to 65,000 Hertz. Along with pitch sensitivity, dogs can detect sounds of lower intensity. This wide range equips dogs to better serve deaf and hard-of-hearing humans.

Toddler-like social cognition and strong sensory sensitivity are just two components that shape service dogs. Puppy-mother attachments also correlate to performance.

A study published in 2017 analyzed puppies bred to be service dogs. Puppies exposed to more intense maternal behaviors – like licking and certain kinds of nursing – were more likely to be released from service dog training programs. This could be due to less exposure to stress: while too much stress inhibits performance, puppies need some experience with stress to manage high-pressure tasks later on. The effect of maternal style on future service animals is still contested and lacks a substantial body of literature, but multiple researchers have observed patterns in dismissals from service dog programs tied to these differences.

Knowing a dog’s worth

While some dogs’ genetics, upbringings and learned skills give them advantages as military or service animals, there is more universal research supporting dogs’ benefits. Some studies suggest dogs boost human oxytocin levels and lower cortisol levels, reducing stress. While dogs can experience heightened cortisol levels from human interaction, dogs can also experience a similar oxytocin boost, encouraging greater bonds.

A dog may have the intelligence of a toddler, but dogs’ sociability and sensory perception allow them to function in vastly different capacities than humans. Even if your four-legged friend spends most of their time on long walks or cuddled up by the fireplace, the human-canine relationship is a beast in itself.

The post Meet Man’s Best Friend… and Hero appeared first on Illinois Science Council.

How Climate Change Fuels the Spread of Mosquito-Borne Diseases

0
0

With July 2023 the hottest month yet recorded on Earth, there are increasing concerns about how climate change will shape the next several decades. We often hear about how climate change will increase disastrous weather events, decimate crops, and create climate refugees. However, something often overlooked is how climate change will increase disease spread, specifically the spread of mosquito-borne diseases.

Mosquito-borne diseases are already a global health concern, causing illness and death for millions of people. Over 80% of the worldwide population is at risk of contracting a vector-borne disease, and around 700,000 deaths occur yearly from such diseases. Dengue, a virus transmitted by Aedes mosquitoes, is the leading cause of arthropod-borne viral disease. Globally, dengue infects 50-100 million people annually, causes approximately 24,000 deaths, and is estimated to cost 2.1 billion dollars per year in the Americas. The only available vaccine has limited efficacy and is currently not widely distributed. Malaria, a disease transmitted by Anopheles mosquitoes, infected 247 million people and caused 619,000 deaths in 2021. Recently, a malaria vaccine was approved; however, due to high demand and potential financial constraints, the vaccine's reach could be limited. The global population is already heavily burdened by mosquito-borne diseases, and climate change is expected to increase this burden.

Climate change will affect mosquitoes in many ways, intensifying the transmission of mosquito-borne diseases. One main factor is changing weather patterns, which are increasing the range and populations of several mosquito species. Since mosquitoes are cold-blooded, most disease-spreading species thrive in tropical or subtropical habitats, preferring warm temperatures and a wet climate. Malaria spread is already shifting to higher altitudes, and in African highlands, the climate suitability for transmission increased by about 30% between 2012 and 2017 compared to a 1950 baseline. According to the World Malaria Program, in Europe, malaria cases have risen by 62% since 2008, and cases of Zika, dengue, and chikungunya have increased by 700%. Dengue has also increased in Cambodia, Vietnam, Laos, the Philippines, Malaysia, and Singapore, according to the World Health Organization. Australia recently experienced its first outbreak of Japanese encephalitis virus, previously confined to Southeast Asia and the Pacific islands. This rise in disease cases is attributed to changing weather patterns driven by climate change.

Looking to the future, dengue is expected to be transmitted in the United Kingdom by 2100. Currently, the farthest north that dengue is frequently found is the southern United States. Additionally, increases in rainfall and flooding will raise mosquito populations in some areas. The larval and pupal stages of the mosquito life cycle happen in water, so more standing water means more breeding grounds and thus larger mosquito populations. In 2021, extreme flooding in Germany increased mosquito populations tenfold. Conversely, droughts could increase mosquito populations in areas that traditionally have heavy rainfall, because heavy rain can disrupt mosquito life cycles by washing out the standing water where mosquitoes lay their eggs. In Colombia, malaria cases increased during a drought year. Warmer temperatures throughout the year will also lengthen the annual transmission season, the part of the year when mosquitoes are active and able to spread disease. In the next 50 years, the transmission season for dengue is expected to increase by four months and for malaria by one month. Due to warming temperatures, 1.3 billion more people are expected to live in areas affected by Zika. In many places, temperatures will allow Zika to be transmitted year-round.

Warming temperatures can also directly affect the pathogens mosquitoes spread, as pathogen development is often temperature dependent. Modeling has shown that rising temperatures, up to about 82 degrees Fahrenheit, increase the potential for a dengue epidemic. Plasmodium falciparum, the major malaria species, goes through several developmental stages before it can be spread to humans. Elevated temperatures shorten this incubation period, leading to more rapid transmission to humans. At 77°F, the malaria parasite needs about 12 days to develop, whereas at 68°F it takes over 30 days, and above 104°F the parasite cannot survive at all. The temperature-dependent nature of pathogen development in mosquitoes emphasizes the significant role that climate change plays in shaping the transmission dynamics of mosquito-borne diseases.
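To make these temperature figures concrete, here is a minimal sketch using the classical degree-day approximation for malaria parasite development (a standard textbook rule of thumb, not a calculation from this article). It assumes the parasite needs roughly 111 degree-days above a threshold of about 16°C, values commonly cited in the literature; the function names are simply illustrative.

    def f_to_c(temp_f):
        """Convert degrees Fahrenheit to degrees Celsius."""
        return (temp_f - 32.0) * 5.0 / 9.0

    def development_days(temp_f, degree_days=111.0, threshold_c=16.0):
        """Approximate days for the malaria parasite to complete development in the
        mosquito at a constant temperature, using the degree-day model. The upper
        lethal limit (around 104 F) noted above is not modeled here."""
        temp_c = f_to_c(temp_f)
        if temp_c <= threshold_c:
            return float("inf")  # too cold: development effectively stalls
        return degree_days / (temp_c - threshold_c)

    for temp_f in (68, 77, 86):
        print(f"{temp_f} F -> roughly {development_days(temp_f):.0f} days")
    # Prints about 28 days at 68 F and 12 days at 77 F, close to the figures quoted above.

The same arithmetic shows why warming matters most near the threshold: a few degrees of warming in a cool climate shortens development time far more than the same warming in an already hot one.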

Mosquito development also changes with temperature. Higher temperatures often reduce hatching and pupation time, causing mosquitoes to reach maturity faster; Anopheles gambiae develops from egg to pupa in 9.3 days at 95°F but 12.6 days at 77°F. However, higher temperatures also result in shorter lifespans and fewer offspring, which could reduce populations in some areas. Additionally, milder winter temperatures could increase populations of mosquitoes such as Culex, a vector for West Nile virus. Culex enters diapause each year during the colder months, a sort of hibernation in which the mosquito will not breed or take blood meals. Freezing winter temperatures kill some diapausing Culex, reducing populations in the spring; without harsh winter conditions, Culex populations could surge. However, diseases spread in temperate areas, such as West Nile virus, may shift to cooler regions, as Culex cannot survive scorching summer temperatures. While some areas experiencing new record high temperatures may see a reduction in the spread of certain mosquito-borne diseases, those diseases are expected to move to cooler regions. In addition, intense and long dry seasons can cause mosquitoes to develop a preference for human hosts, likely because of their dependence on human-provided water sources. Surprisingly, temperature changes also affect mosquito biting behavior: rising temperatures shorten the time needed to digest a blood meal, causing females to take blood meals more frequently and giving them more opportunities to pass diseases to humans.

Although the relationships among the factors driving the spread of mosquito-borne diseases are complex, it is clear that climate change will have profound implications for that spread. Urgent action is needed to mitigate the impacts of climate change. Current mosquito control methods, such as insecticides and land management, along with improved healthcare infrastructure and the development of new vaccines and treatments, will be vital in reducing future disease spread. Reducing mosquito-borne diseases requires a diverse approach combining climate change mitigation efforts with comprehensive disease prevention and control strategies. Only time will tell how climate change will affect mosquito-borne disease spread, but it is important to recognize that proactive measures are essential to reduce future impacts.

The post How Climate Change Fuels the Spread of Mosquito-Borne Diseases appeared first on Illinois Science Council.

The Future of Nuclear Power in the United States

0
0

Energy is one of the most important resources for humanity in the 21st century, and electricity is the most common form of energy.[1] The primary sources of electricity generation in the US include, but are not limited to, natural gas, nuclear fission, solar and wind. Coal used to be the leading source of energy until the Regional Transmission Organizations (RTOs), the energy market managers in the US, slowly phased it out as a fuel under strict environmental regulations targeting its release of carbon dioxide, methane, nitrogen oxides, and other pollutants. The negative impact of fossil fuels and the rise of renewables like solar, wind and biofuels have been making the news in the past decade, while nuclear power has somehow slipped through the cracks of public debate. There is, quite rightly, a fear associated with the word "nuclear," but nuclear fission reactors have been silently powering a fifth of the US power grid for decades. Notably, only two water-cooled Generation 3 reactors have been added to the US grid in the past three decades,[2] and there are no plans to add more in the future. Currently only 20% of electric power in the US comes from nuclear fission, while in France its share is about 70%, and other nations are rapidly shifting towards nuclear power. This article will attempt to shed some light on the economics of nuclear fission power generation in the United States and how it is reshaping the future of energy markets and innovation.

Nuclear Power Economics

Because of its availability, natural gas surpassed coal in 2016 to become the cheapest electricity-generating fuel and is currently the largest source of power in the US. Depending on the nature of the fuel, some power sources, like natural gas and nuclear, qualify as "baseload" sources (sources that can generate electricity at all hours, acting as a constant supply)[3], while others, like solar and wind, are at best "intermittent" sources because they are not always available. For instance, solar power would not be our best hope on a cloudy day. Both natural gas and nuclear fission are the main sources of baseload power here in the US, but the latter is economically unsustainable in its current form.[4] This article examines the approximate costs, timescales, and pros and cons of nuclear power, and how these factors affect its future in the US energy economy.

Currently, building a nuclear power plant costs about five times as much as a natural gas plant, but nuclear fuel costs roughly one-sixth to one-seventh as much as natural gas. Construction of a nuclear plant takes around 5 years, as opposed to 2 years for natural gas. Utility companies are not allowed to charge customers for a power plant before it produces electricity, so a nuclear plant is in a much deeper cash deficit when it starts producing power 5 years after its inception than its natural gas counterpart, which starts producing power after 2 years. As a result, a fission plant becomes profitable only after roughly 11 years, as opposed to about 5 years for a natural gas plant. However, in the long run (20-30 years), nuclear power plants profit significantly more than their natural gas counterparts, and the lower operating costs of nuclear plants accelerate that profitability. The delay in profit, however, discourages politicians and industrialists from building them. The infamy of nuclear weapons and the Chernobyl and Fukushima reactor catastrophes[5] have led to policies (like high taxes) that discourage the building of new fission reactors, making them politically unpopular. At the same time, the government has had to regulate energy markets to favor the survival of nuclear power, an already economically disfavored source of energy. All these factors indicate an unfavorable market for nuclear power when overall energy prices are low. However, the Russian invasion of Ukraine has shown the world how uncertain supply chains can be (specifically for natural gas, since Russia is a major supplier), bringing nuclear power back into the public debate. And the cost to society of climate change driven by greenhouse gases from natural gas should make the United States reevaluate its policy on nuclear power.
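To illustrate the payback arithmetic described above, here is a minimal, purely hypothetical cash-flow sketch. The dollar figures are invented placeholders (not data from this article or any utility), chosen only so the results land near the roughly 5-year and 11-year payback periods quoted; interest, inflation, and electricity price swings are ignored.

    def payback_years(capital_billion, build_years, annual_profit_billion):
        """Years from the start of construction until cumulative cash flow turns
        positive, assuming no revenue during construction and a constant annual
        profit once the plant comes online."""
        return build_years + capital_billion / annual_profit_billion

    # Hypothetical round numbers: a gas plant costing $1B that nets $0.33B per year,
    # and a nuclear plant costing five times as much but netting more per year
    # thanks to far cheaper fuel and lower operating costs.
    gas = payback_years(capital_billion=1.0, build_years=2, annual_profit_billion=0.33)
    nuclear = payback_years(capital_billion=5.0, build_years=5, annual_profit_billion=0.85)

    print(f"Natural gas plant: about {gas:.0f} years to break even")      # ~5 years
    print(f"Nuclear plant:     about {nuclear:.0f} years to break even")  # ~11 years

Past its break-even point, the nuclear plant's larger annual margin is what makes it the more profitable asset over a 20- to 30-year horizon, which is exactly the long-run advantage described above.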

Is There a Way to Sustain Nuclear Power Generation in the US?

Several issues underlie the use of nuclear power, such as the management of nuclear waste and the risk of nuclear weapons proliferation. Still, it is in the world's interest to find solutions to these problems that allow nuclear power plants to stay afloat. Several companies, like NuScale, TerraPower and Westinghouse, are already developing modular atomic reactors with generation capacities of several hundred MWe (megawatts electric, the output power generated by a power plant). These Small Modular Reactors (SMRs) have several key advantages:

  • They can be manufactured in a factory setting and transported to and set up at the site directly, saving hundreds of millions of dollars in site construction costs.
  • They take less time to manufacture, which means less interest incurred on loans and therefore a lower overall cost passed on to customers over time.
  • They generate around a third of the power of a conventional Generation 3 reactor unit, but their small size makes them easy to install in geographical locations previously inaccessible to large-scale nuclear projects.

Illustration comparing a traditional nuclear power plant with a small modular reactor.

(image courtesy: Idaho National Laboratory)

Small Modular Reactors (SMRs) can be the next leap in nuclear power

In addition to the points mentioned above, the modularity of SMRs allows them to be bought pre-assembled from factories and installed directly on-site, which saves both time (equivalent to money in this case) and construction costs.

Because of the economic viability of this technology, we can expect new modular fission reactors to be added to the grid, increasing nuclear power's share of the US baseload clean energy capacity. Government regulation of nuclear energy is still largely about controlling the market as well as enforcing stringent safety protocols for operation and waste disposal. SMRs have the potential to be economically competitive with natural gas and with renewable power sources, which means nuclear power based on SMRs will not need the government's help (in the form of regulated energy markets) to ensure its survival. However, regulations regarding nuclear waste disposal and nuclear safety will need significant revision, because SMRs can generate power with much less enriched 235U fuel, thereby producing far less harmful nuclear waste. These advantages should simplify the safety and waste-disposal regulations enforced by the US Nuclear Regulatory Commission (NRC). This new technology does not, however, hold the answer to efficient nuclear waste disposal, and separate research efforts and investments are still necessary in that direction.

Nuclear power policy depends heavily on the technology's unique scientific properties and the economics that follow from them, which sometimes makes it even more controversial than greenhouse-gas-emitting sources of energy. This article is therefore meant to provide some understanding of where we stand in the nuclear power debate and what its future looks like. Most SMR innovation companies are based in the US but also have industrial setups overseas, like NuScale in Poland. Although this article does not touch on international nuclear regulation, that framework needs extensive revision, because the need for large-scale nuclear power generation must come to outweigh the fear of nuclear weapons, which can only happen through significant international collaboration. Nuclear fission science and policy are thus extensive and far-reaching, and they will demand ever more human effort if we are to live sustainably.

[1] “Factbox: Power Outages Hit More than 500,000 in the U.S. Due to Storms | Reuters.”

[2] Power, “Vogtle 3 & 4 Nuclear Units Take Significant Steps toward Operations.”

[3] “Baseload Power – Energy Education.”

[4] beyondnuclearinternational, “Nuclear Too Expensive and Not Needed.”

[5] Konoplev, “Fukushima and Chernobyl.”

The post The Future of Nuclear Power in the United States appeared first on Illinois Science Council.


“Our Once and Future Wetlands: My Experience as The Artist-In-Residence with The Wetlands Initiative”

0
0
Artist's design of a "floral collar" inspired by work on wetlands restoration using stitched thread and textiles on linen fabric.

Calumet Region III: Symbolic representation of the hemi-marsh and fragmented landscape
Cotton thread, cotton ground, collaged fabric
Mounted on linen

 

Science Art exists on a continuum. At one end of the spectrum is scientific illustration: art in the service of science, used to teach concepts or visualize big ideas. At the other end is art inspired by science: plenty of artistic flash but short on science. As a 10-year practitioner of science art, and a former science avoider, I aim for the middle of this continuum: using my toolkit as a fine artist, I want to communicate accurate scientific concepts and express a human connection, knowing that I have about three seconds to capture someone's attention. I do this through control of texture, color, line quality, value (light and dark), symmetry or asymmetry, space, shape and form. As the first artist-in-residence for the Wetlands Initiative (TWI), I create artwork that highlights the achievements, techniques, and challenges that their ecologists face in restoring the Midwest's lost wetlands.

In particular, I have focused my attention on TWI’s restoration efforts in the Calumet region, a complex of wetlands at the southern end of Lake Michigan stretching across Chicago’s Southeast Side and Northwest Indiana. The Industrial Age led to vast and rapid filling and altering of that landscape. Today the Calumet region is scattered with remnant pieces of its rich wetlands heritage amid the remnants of heavy industry and a busy transportation corridor.

Photo of pile of garbage in front of chain link fence in field with brown grass with water in background.

Indian Ridge Marsh Before Restoration
Photo Credit Gary Sullivan

 

This area, with its huge challenges and equally large promise, inspired me to begin my work here. It was a huge creative and intellectual challenge to figure out how to express the rich combination of art, ecology and engineering and to capture the realities of carrying out conservation work on land situated in the Rust Belt. TWI, in partnership with other organizations, has achieved huge gains in healing some of these damaged ecosystems and returning them to welcoming, publicly accessible natural areas that support rare birds and wildlife. 

Photo of author and researcher standing in marsh holding a water gauge

Installing Water Gauge; the artist/author and Harry Kuttner
Photo Credit Gary Sullivan

 

On my first day working as TWI's resident artist, I found myself in the bright June sunlight, standing ankle-deep in mud and helping to install water gauges with the crew. I met the TWI team at their site along the West Branch of the Little Calumet River floodplain corridor. Restoration begins with bringing back a more natural hydrology.

I listened to the discussions between Dr. Gary Sullivan, TWI’s senior ecologist; Harry Kuttner, Calumet coordinator; and Jim Monchak, Geospatial Analyst. They were discussing last season’s eradication of an aggressive invasive species of reed grass and the coming season’s plans for planting plugs (seedlings) of native species. In the background the drone of the expressway blended with bird calls and buzzing pollinators. 

This place was alive with energy, and not just the kind pulsating through the ComEd pylons above our heads. At my feet were native plants: rattlesnake master, hybrid cattail, hard-stem bulrush, pickerel weed, and sweet flag, to name only a few. All are thriving in the space created by the removal of invasive plants the previous fall. Having nearly fallen in the muck that morning, I also realized I was falling hard for wetland field work in general and this region in particular.

Image of researcher Gary Sullivan’s Hemi-Marsh map with multiple squares of different-colored green squares of fabric.

Gary Sullivan’s Hemi-Marsh Map

 

Working alongside the crew that day, I learned about the importance of hemi-marsh habitat (a marsh with approximately equal parts open water and emergent vegetation) and how the parcels of land in the region are no longer connected by the original hydrology that once dominated this area. TWI works to re-sculpt the land (literally, with bulldozers!) and installs water control structures to mimic the natural cycles of ebb and flow in these isolated parcels of wetlands. The restoration process is very much a partnership between a deep understanding of wetland ecology and engineering prowess. When these processes are returned, nature takes over and the wetlands begin to function as they were intended: to clean water and support a diverse group of plants, insects, animals, birds, and even microbes unique to this region.

Photo of Muskrat swimming in water with reeds

Muskrat
Photo Credit Gary Sullivan

 

Once the right water conditions are restored, the hemi-marsh is created primarily by the activity of muskrats. They do so by cutting down marsh vegetation in characteristic patterns to create their homes and feeding “platforms” within the aquatic landscape. They are the hemi-marsh architects. Beavers also create deeper channels for traveling, sometimes cutting them across land to create canals to a food or building resource. 

Photo is author's notebook with sketches surrounded by books about wetlands

Combining science research and generating visual ideas

Photo of King Tut floral collar from the MET Museum

King Tut collar from the MET Museum

 

Early in my visual research exploring the online collection at the Metropolitan Museum of Art, I stumbled upon a 3000-year-old floral collar that was buried with King Tut. I was familiar with ancient Egyptian depictions of bejeweled collars on images of kings and queens. What I did not know is that royalty was buried with elaborately fashioned fresh flower collars. We are not the only people in history to use fresh flowers to honor our dead.

Egyptian tomb builders supplied all earthly tools and supplies, assuming the afterlife would mimic life on earth. This collar was a representation of the birth, death, and rebirth of the recently deceased. The format of my textile collars represents two symbolic ideas: the idea that wetland restoration is also a form of birth, death, and rebirth. And the collar (a broken circle) is also a stand-in for the ways we humans have broken the cycles of nature.

The other intriguing connection to the project is that the funeral collars are, in intent at least, a garment intended to be worn. I used this symbolism to suggest a very personal connection between viewers and wetland restoration.

Artist's design of a "floral collar" inspired by work on wetlands restoration using stitched thread and textiles on linen fabric.

Calumet Region I: Textile collar representing the Calumet hemi-marsh
Cotton thread, cotton ground, collaged fabric
Mounted on linen

 

The central theme in this piece (Calumet Region I) is the creation of hemi-marsh conditions. The different depths found in Gary’s map are the foundation of the piece. I’ve stitched flora commonly found in a restored hemi-marsh:  American lotus, sneezeweed, and marsh marigold. The small squares in the art piece represent the idea that the remnant wetlands are scattered and disconnected. In the Calumet region, untangling ownership of these parcels is complex, requiring patience and persistence. As work continues, secretive marsh birds like the Illinois State-endangered Common Gallinule and Black-crowned Night-Heron, pollinators and other insects, amphibians and fish are beginning to establish themselves in the region.  

Photo of a Juvenile Black-Crowned Night-Heron bird resting on a reed above green marsh water


 

Three people wearing hip waders standing in green marsh water installing a measuring pole.

Slide of hip waders
Photo Credit Lindsay Olson

 

Artist's design of a "floral collar" inspired by work on wetlands restoration using stitched thread and textiles on linen fabric,

Calumet II
Cotton thread, cotton ground, collaged fabric
Mounted on linen

 

In this piece (Calumet II), I used the radiating lines to express the divided parcels of land and the many expressways and levees that crisscross the region. There is a suggestion of the work of beavers and muskrats as well. The neckline of the piece is a nod to the water control structures installed by TWI that allow site managers to raise and lower water levels, mimicking natural water fluctuations. This fluctuation is needed to sustain a healthy, functional hemi-marsh.

My intention is not to create a photo-realistic portrait of a restored wetland but to convey an impression of the process of restoration within an urban, industrial setting, including manmade elements like the roads that surround and sometimes cut through wetland areas. I wanted the work to look deliberately "designed" and to quietly express the restoration efforts. I've also chosen muted colors instead of the often-flamboyant colors of flora in a restored wetland. Wetland restoration takes decades of slow, incremental work to achieve results that may not be fully realized in one person's lifetime.

Photo of green field of plants with yellow flowers with rail cars visible in background

Indian Ridge Marsh
Photo credit: Lindsay Olson

 

Working in an area of deep human disturbance, TWI is restoring a wetland over a layer of slag, the industrial waste from steel manufacturing and a material widely found in this area.

Even though it is in its early stages, this stunningly original project is already yielding a significant change in plant and animal diversity: secretive marsh birds, pollinators and plants are returning to the land.

Photo of lush green marsh area with many plants and water plants

Dixon Waterfowl Refuge
An example of decades of restoration efforts
Photo Credit Lindsay Olson

 

To create a project, I deliberately put myself in an uncomfortable place. I entered the residency with TWI not knowing anything about the process of wetland restoration. With the generous help of their staff, I have been able to participate in and witness the creation of functional wetlands clawed back from formerly degraded landscapes. The Calumet region is not just an industrial landscape to pass through on the way from here to there. I've seen what happens when people join forces to implement big dreams. I am deeply grateful for this opportunity to work with TWI, to use my art to help spread the word about how they are reclaiming these parcels of land, and to help others connect with this hopeful work.

 

Website:  Lindsayolsonart.com

Instagram:  @lindsayolson816

The post “Our Once and Future Wetlands: My Experience as The Artist-In-Residence with The Wetlands Initiative” appeared first on Illinois Science Council.

Total Solar Eclipse on April 8, 2024

0
0

On April 8th, 2024, a total solar eclipse will sweep across North America, from Mexico to the Maine-Canadian border. For those who experienced the spectacular solar eclipse of 2017, this one will be similar, crossing the United States from west to east and passing through or near several major metropolitan areas. While its path is quite different this time, Carbondale, Illinois, a reasonable destination for Chicago-area residents, will once again be on the line of totality.    

Just a little background on eclipses: lunar and solar eclipses are not uncommon – each occurs about twice a year, when the moon crosses the ecliptic, the path of the sun in the sky. When the moon is new, we experience a solar eclipse; when it is full, we see a lunar eclipse. For a lunar eclipse, much of the Earth will see totality, assuming clear skies, and totality typically lasts 30 minutes to an hour. But for a solar eclipse, totality is visible only from a narrow band about 70 miles wide and lasts for only minutes. Totality can last as long as 7 minutes, but for the 2024 solar eclipse it will be a little over 4 minutes for much of North America. While the moon will completely cover the sun in the Carbondale area, people viewing from Chicago will see only about 92% of the sun covered by the moon. Cool, but a long way from the awesome experience of totality.

The 2024 solar eclipse will begin on the continent at 11:51 AM Mexico Daylight Time in Mazatlán, Mexico. The shadow will cross the Rio Grande at 12:10 PM CDT, and San Antonio, Austin, Fort Worth, and Dallas are all in its path. The length of totality will depend on how near you are to the centerline of the shadow, and the difference can amount to minutes of totality. From Texas, the eclipse will move up through Oklahoma, Arkansas, Illinois, Kentucky, Ohio, Pennsylvania, New York, Vermont, New Hampshire, and Maine. Along the way, Indianapolis, Cleveland, Buffalo, and Rochester are near the centerline. You can see the path and length of totality using this interactive link.

Path-of-solar-eclipse-across-North-America-April-8-2024

Credit: Esri, TomTom, Garmin, FAO, NOAA, USGS, EPA, USFWS

So where should you plan your viewing? As a Chicagoan, I watched from a wonderful location near Carbondale in August 2017. However, skies can be cloudy in the Midwest during April, so I am planning on visiting friends in Austin, Texas who have property near the centerline – balancing greater travel distance against the probability of clear skies. Of course, probability is not certainty or even likelihood. Carbondale could be sunny and clear. But wherever you decide to go, you should start to plan lodging soon as accommodations are already filling up.

What can you expect to see?

In the line of totality, you can observe changes as the moon begins to move across the disc of the sun. Slowly the sky and the Earth will darken until it seems dusk is setting in. Do you notice a change in animals, birds halting their songs, cows headed home? How do the shadows around you change? Do colors appear different? Hundreds of tiny eclipsed suns may appear beneath the trees.

In Carbondale, Illinois, first contact of the moon with the sun will be at around 12:45 PM. With safe viewing devices, you can watch the moon cover more and more of the sun until totality begins at about 2:01 PM. During totality, you can view the sun safely with the naked eye. It is not the visible brightness of the sun that damages the eyes so much as the ultraviolet and infrared radiation it emits; during totality, the sun's photosphere is completely covered and only the corona, the sun's gassy "atmosphere," is visible. In Carbondale, totality will last for about 4 minutes and 10 seconds, although this length will vary across the continent, with the longest times in Mexico and Texas and the shortest as the eclipse moves towards the northeast. Following totality, the eclipse will play out in reverse, until final contact at about 3:16 PM in Carbondale, about 2½ hours in all.

If you are lucky enough to be in the path of totality, are there things to watch for? Do any stars or planets come into view as the sky darkens, especially near totality? For this eclipse, all seven planets will be lined up along the ecliptic near the sun, although Uranus and Neptune will certainly be too dim to see. (Still, this will be a chance to see all seven of the wandering stars identified by the ancients, which gave the days of our week their names. Two of them, the sun and moon, will be easy, and bright Venus and Jupiter should also be easy to find.)

As the sun moves into totality, there are three phenomena worth noting:

  • Baily's Beads: As the moon covers the sun, the uneven surface of the moon, with its mountains and valleys, allows bright beads of light to appear along the edge.
  • The Diamond Ring effect is related to Baily's Beads and is the flash along the edge when only one bright bead remains. The flash is like a bright diamond set on the thin ring of the sun.
  • The corona:  With the sun completely covered, the eclipse can be observed without glasses or other devices. The sun’s corona is its upper atmosphere and is observable to the naked eye only during totality when the sun’s surface, the photosphere, is completely hidden. Prior to our space telescopes, solar eclipses were the only time astronomers could actually study the sun’s atmosphere.

What colors do you see as the sun is eclipsed? How does the darkness of the sky vary from the eclipsed sun to the horizon? Of course, it is likely that you will be so transfixed by the magic of totality that these observations will be easily forgotten in the brief 4 minutes you have.

Photo of the diamond ring effect of a solar eclipse

Credit Ashley Marando NASA

Viewing devices

Before totality, eclipse glasses are probably the easiest way to safely observe the sun. You can watch the sun from first to last contact using them; in fact, you can look at the sun anytime with these glasses, even seeing large sunspots. Don't wait until next spring to look for eclipse glasses – like lodging, they will be difficult to obtain as the eclipse approaches. And make certain you purchase them from a reputable source: they should carry an ISO 12312-2 rating to be safe. Observing the sun through sunglasses, film negatives, or the reflection off a mirror or water in a bucket is NOT safe.

Another safe method is to use welder’s glasses of level 14 or darker, if you have them.   

Another fun and inexpensive method for observing the eclipse is to make a pinhole projector or a solar projector using binoculars or a telescope on a stand. If you have a telescope, you can purchase a solar filter (but remember, do not look through the viewfinder to find the sun!)
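For a sense of how large the projected image will be, here is a small back-of-the-envelope sketch of my own (not from the article). It assumes only that the sun's apparent diameter is about 0.53 degrees, which means the projected image is roughly 1/108th as wide as the distance from the pinhole to the screen.

    import math

    SUN_ANGULAR_DIAMETER_DEG = 0.53  # approximate apparent size of the sun in the sky

    def image_diameter_cm(screen_distance_cm):
        """Width of the projected solar image for a given pinhole-to-screen distance."""
        return screen_distance_cm * math.tan(math.radians(SUN_ANGULAR_DIAMETER_DEG))

    for distance_cm in (50, 100, 200):
        print(f"{distance_cm} cm from the pinhole -> image about {image_diameter_cm(distance_cm):.1f} cm wide")
    # A screen held about a meter away shows a little sun just under 1 cm across.

So a shoebox-length projector yields only a pea-sized image; holding the screen farther from the pinhole gives a larger, if dimmer, view.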

Oh, and another cool way to observe the sun is with a colander or beneath a tree that will project dozens of eclipsing suns on the ground.

During the four minutes or so of totality, you can put all of these viewing aids away and enjoy the wonder: the blackened sun, hanging 64 degrees above the southern horizon in Carbondale, will be safe to view. However, if you are in Chicago or another location outside of totality, you must protect your eyes the entire time.

Hopefully the 2024 solar eclipse has been on your calendar and you have already made plans. If not, now is the time. If you miss this eclipse, the next total eclipse to cross the United States will be in August 2045, with a robust 6 minutes of totality. You may want to mark that on your calendar now – 20 years pass quickly! The rooms may already be filling up.

 

The post Total Solar Eclipse on April 8, 2024 appeared first on Illinois Science Council.




