Vocab v49

Epistemology

Epistemological problems are concerned with the nature, scope and limitations of knowledge. Epistemology may also be described as the study of knowledge.

inertial frame of reference

A frame of reference that is not accelerating: one in which Newton's laws of motion hold without fictitious forces.

complicit

Involved with others in an illegal activity or wrongdoing; being an accomplice in a wrongful act. "all of these people are complicit in some criminal conspiracy"

Phylogenetic tree

A phylogenetic tree or evolutionary tree is a branching diagram or "tree" showing the evolutionary relationships among various biological species or other entities—their phylogeny—based upon similarities and differences in their physical or genetic characteristics.

hepatitis

Hepatitis is an inflammation of the liver. Viruses cause most cases of hepatitis. The type of hepatitis is named for the virus that causes it; for example, hepatitis A, hepatitis B or hepatitis C. Drug or alcohol use can also cause hepatitis. In other cases, your body mistakenly attacks healthy cells in the liver.

Hyper-Kamiokande

Hyper-Kamiokande is a neutrino observatory being constructed on the site of the Kamioka Observatory, near Kamioka, Japan. The project started in 2010 as a successor to Super-Kamiokande and was ranked among the 28 top-priority projects of the Japanese government. Thirteen countries from three continents are involved in the program. Construction was given final approval on 13 December 2019[2] and was scheduled to start in April 2020,[3] with data-taking planned to begin in 2027. Hyper-Kamiokande will have a tank with a billion litres of ultrapure water (UPW), 20 times larger than the tank for Super-Kamiokande. This increased capacity will be accompanied by a proportional growth in the number of sensors. The tank will be a double cylinder, 2 × 250 meters long and each approximately 40 × 40 meters in cross-section, buried 650 meters deep[5] to reduce interference from cosmic radiation. Among the scientific objectives will be the search for proton decay.

Why did NASA invent memory foam?

Memory foam, also known as temper foam, was developed under a NASA contract in the 1970s that set out to improve seat cushioning and crash protection for airline pilots and passengers. Memory foam has widespread commercial applications, in addition to the popular mattresses and pillows.

Oxalate

Oxalate is a natural substance found in many foods. It binds to calcium during digestion in the stomach and intestines and leaves the body in stool. Oxalate that is not bound to calcium travels as a waste product from the blood to the kidneys, where it leaves the body in the urine. Many metal ions form insoluble precipitates with oxalate, a prominent example being calcium oxalate, the primary constituent of the most common kind of kidney stones. The oxalate ion is C₂O₄²⁻.

Sonoluminescence

Sonoluminescence is the emission of short bursts of light from imploding bubbles in a liquid when excited by sound: a small gas bubble, acoustically suspended in a liquid and periodically driven at ultrasonic frequencies, undergoes collapse (cavitation) and emits light.

Upper critical solution temperature

The upper critical solution temperature (UCST) or upper consolute temperature is the critical temperature above which the components of a mixture are miscible in all proportions.[1] The word upper indicates that the UCST is an upper bound to a temperature range of partial miscibility, or miscibility for certain compositions only. For example, hexane-nitrobenzene mixtures have a UCST of 19 °C, so that these two substances are miscible in all proportions above 19 °C but not at lower temperatures.[2] Examples at higher temperatures are the aniline-water system at 168 °C (at pressures high enough for liquid water to exist at that temperature),[3] and the lead-zinc system at 798 °C (a temperature where both metals are liquid).[4]

Vestibular system

The vestibular system is a sensory system that is responsible for providing our brain with information about motion, head position, and spatial orientation; it also is involved with motor functions that allow us to keep our balance, stabilize our head and body during movement, and maintain posture.

whale poop

Whale feces, the excrement of whales, has a significant role in the ecology of the oceans, and whales have been referred to as "marine ecosystem engineers".

X-ray notation

X-ray notation is a method of labeling atomic orbitals that grew out of X-ray science. Also known as IUPAC notation, it was adopted by the International Union of Pure and Applied Chemistry in 1991 as a simplification of the older Siegbahn notation.[1] In X-ray notation, every principal quantum number is given a letter associated with it. In many areas of physics and chemistry, atomic orbitals are described with spectroscopic notation (1s, 2s, 2p, 3s, 3p, etc.), but the more traditional X-ray notation is still used with most X-ray spectroscopy techniques including AES and XPS.
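As a quick illustration, a minimal sketch of the labeling convention (the mapping below assumes the standard IUPAC correspondence of shell letters K, L, M, ... to principal quantum numbers n = 1, 2, 3, ..., with numeric subscripts distinguishing subshells):
# Sketch: spectroscopic orbital labels -> X-ray (IUPAC) labels
xray_shell = {1: "K", 2: "L", 3: "M", 4: "N"}
examples = {"1s": "K", "2s": "L1", "2p1/2": "L2", "2p3/2": "L3"}
for orbital, label in examples.items():
    n = int(orbital[0])  # principal quantum number is the leading digit
    print(f"{orbital} (n={n}, shell {xray_shell[n]}) -> {label}")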

Chimera

A fantasy; a horrible creature of the imagination; a thing that is hoped or wished for but is in fact illusory or impossible to achieve. (In Greek mythology) a fire-breathing female monster with a lion's head, a goat's body, and a serpent's tail.

proboscis

A long flexible snout, as of an elephant. A proboscis is an elongated appendage from the head of an animal, either a vertebrate or an invertebrate. In invertebrates, the term usually refers to tubular mouthparts used for feeding and sucking. In vertebrates, a proboscis is an elongated nose or snout.

Schwarzschild radius

a measure of the size of the event horizon of a black hole
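For reference, in LaTeX form: r_s = \frac{2GM}{c^2}, where M is the mass of the body. For the Sun this works out to about 3 km; for the Earth, about 9 mm.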

post-secondary education

Any education or training following high school; also called tertiary education.

catabolic

Breaking down; a process in which large molecules are broken down into smaller ones.

Catheter ablation

Brief delivery of radiofrequency energy to destroy areas of heart tissue that may be causing arrhythmias. Catheter ablation is a procedure used to remove or terminate a faulty electrical pathway from sections of the hearts of those who are prone to developing cardiac arrhythmias such as atrial fibrillation, atrial flutter, supraventricular tachycardias and Wolff-Parkinson-White syndrome.

Curie temperature

The temperature at which magnetism disappears: the critical point where a material's intrinsic magnetic alignment changes direction. In physics and materials science, the Curie temperature (TC), or Curie point, is the temperature above which certain materials lose their permanent magnetic properties, which can (in most cases) be replaced by induced magnetism. The Curie temperature is named after Pierre Curie, who showed that magnetism is lost at a critical temperature.
The force of magnetism is determined by the magnetic moment, a dipole moment within an atom which originates from the angular momentum and spin of electrons. Materials have different structures of intrinsic magnetic moments that depend on temperature; the Curie temperature is the critical point at which a material's intrinsic magnetic moments change direction. Permanent magnetism is caused by the alignment of magnetic moments, and induced magnetism is created when disordered magnetic moments are forced to align in an applied magnetic field. For example, ordered magnetic moments (ferromagnetic) become disordered (paramagnetic) at the Curie temperature. Higher temperatures make magnets weaker, as spontaneous magnetism only occurs below the Curie temperature. Magnetic susceptibility above the Curie temperature can be calculated from the Curie-Weiss law, which is derived from Curie's law.
In analogy to ferromagnetic and paramagnetic materials, the Curie temperature can also be used to describe the phase transition between ferroelectricity and paraelectricity. In this context, the order parameter is the electric polarization, which goes from a finite value to zero when the temperature is increased above the Curie temperature.
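For reference, the Curie-Weiss law for the susceptibility above the Curie point, in LaTeX form: \chi = \frac{C}{T - T_C} \quad (T > T_C), where C is the material-specific Curie constant.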

geodesic

Relating to or denoting the shortest possible line between two points on a sphere or other curved surface. Just think of longitude lines on a globe.

Saturation vapor density

The maximum amount of water vapor that air can hold at a given temperature. Saturation vapor density (SVD) is a concept closely tied to saturation vapor pressure (SVP). It can be used to calculate the exact quantity of water vapor in the air from the relative humidity (RH), the ratio of the measured local humidity to the maximum possible. Given an RH percentage, the density of water in the air is given by actual vapor density = RH × SVD; alternatively, RH = actual vapor density / SVD. As relative humidity is a dimensionless quantity (often expressed as a percentage), vapor density can be stated in units of grams or kilograms per cubic meter.
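A minimal worked sketch of that relation, assuming the standard textbook SVD of roughly 17.3 g/m³ at 20 °C:
svd = 17.3           # saturation vapor density at 20 C, g/m^3 (textbook value)
rh = 0.60            # relative humidity, 60%
actual = rh * svd    # actual vapor density
print(f"actual vapor density: {actual:.1f} g/m^3")  # ~10.4 g/m^3
print(f"recovered RH: {actual / svd:.0%}")          # 60%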

Adrenochrome

(n) a red-colored mixture of quinones derived from epinephrine by oxidation Adrenochrome is a chemical compound with the molecular formula C9H9NO3 produced by the oxidation of adrenaline (epinephrine). The derivative carbazochrome is a hemostatic medication. Despite a similarity in chemical names, it is unrelated to chrome or chromium.

Tidal bore

A high, often breaking wave generated by a tide crest that advances rapidly up an estuary or river.

Centauro event

A Centauro event is a kind of anomalous event observed in cosmic-ray detectors since 1972. They are so named because their shape resembles that of a centaur: i.e., highly asymmetric. If some versions of string theory are correct, then high-energy cosmic rays could create black holes when they collide with molecules in the Earth's atmosphere. These black holes would be tiny, with a mass of around 10 micrograms. They would also be unstable enough to explode in a burst of particles within around 10^−27 seconds.

Charge-coupled device

A charge-coupled device (CCD) is a device for the movement of electrical charge, usually from within the device to an area where the charge can be manipulated, such as conversion into a digital value. This is achieved by "shifting" the signals between stages within the device one at a time. CCDs move charge between capacitive bins in the device, with the shift allowing for the transfer of charge between bins. CCD is a major technology for digital imaging.
In a CCD image sensor, pixels are represented by p-doped metal-oxide-semiconductor (MOS) capacitors. These MOS capacitors, the basic building blocks of a CCD,[1] are biased above the threshold for inversion when image acquisition begins, allowing the conversion of incoming photons into electron charges at the semiconductor-oxide interface; the CCD is then used to read out these charges. Although CCDs are not the only technology to allow for light detection, CCD image sensors are widely used in professional, medical, and scientific applications where high-quality image data are required. In applications with less exacting quality demands, such as consumer and professional digital cameras, active pixel sensors, also known as CMOS sensors (complementary MOS sensors), are generally used. However, the large quality advantage CCDs enjoyed early on has narrowed over time.
In a CCD for capturing images, there is a photoactive region (an epitaxial layer of silicon), and a transmission region made out of a shift register (the CCD, properly speaking). An image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. A one-dimensional array, used in line-scan cameras, captures a single slice of the image, whereas a two-dimensional array, used in video and still cameras, captures a two-dimensional picture corresponding to the scene projected onto the focal plane of the sensor. Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor (operating as a shift register). The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage. By repeating this process, the controlling circuit converts the entire contents of the array in the semiconductor to a sequence of voltages.
In a digital device, these voltages are then sampled, digitized, and usually stored in memory; in an analog device (such as an analog video camera), they are processed into a continuous analog signal (e.g. by feeding the output of the charge amplifier into a low-pass filter), which is then processed and fed out to other circuits for transmission, recording, or other processing. Before the MOS capacitors are exposed to light, they are biased into the depletion region; in n-channel CCDs, the silicon under the bias gate is slightly p-doped or intrinsic. The gate is then biased at a positive potential, above the threshold for strong inversion, which will eventually result in the creation of an n channel below the gate as in a MOSFET.
However, it takes time to reach this thermal equilibrium: up to hours in high-end scientific cameras cooled at low temperature.[22] Initially after biasing, the holes are pushed far into the substrate, and no mobile electrons are at or near the surface; the CCD thus operates in a non-equilibrium state called deep depletion.[23] Then, when electron-hole pairs are generated in the depletion region, they are separated by the electric field, the electrons move toward the surface, and the holes move toward the substrate. Four pair-generation processes can be identified.
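To make the shift-register readout described above concrete, a minimal sketch (hypothetical charge values, not any real driver API):
charges = [0.8, 0.3, 0.5, 0.9]  # charge accumulated in each pixel bin (arbitrary units)
voltages = []
while charges:
    # the bin nearest the output dumps into the charge amplifier...
    voltages.append(charges.pop())
    # ...and every remaining packet has shifted one bin toward the output
print(voltages)  # sequence of voltages, ready for the ADC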

Quantum dot solar cell

A quantum dot solar cell (QDSC) is a solar cell design that uses quantum dots as the absorbing photovoltaic material. It attempts to replace bulk materials such as silicon, copper indium gallium selenide (CIGS) or cadmium telluride (CdTe). Quantum dots have bandgaps that are tunable across a wide range of energy levels by changing their size. In bulk materials, the bandgap is fixed by the choice of material(s). This property makes quantum dots attractive for multi-junction solar cells, where a variety of materials are used to improve efficiency by harvesting multiple portions of the solar spectrum. As of 2019, efficiency exceeds 16.5%.
In a conventional solar cell, light is absorbed by a semiconductor, producing an electron-hole (e-h) pair; the pair may be bound and is referred to as an exciton. This pair is separated by an internal electrochemical potential (present in p-n junctions or Schottky diodes), and the resulting flow of electrons and holes creates electric current. The internal electrochemical potential is created by doping one part of the semiconductor interface with atoms that act as electron donors (n-type doping) and another with electron acceptors (p-type doping), resulting in a p-n junction. Generation of an e-h pair requires that the photons have energy exceeding the bandgap of the material. Effectively, photons with energies lower than the bandgap do not get absorbed, while those that are higher can quickly (within about 10⁻¹³ s) thermalize to the band edges, reducing output. The former limitation reduces current, while the thermalization reduces the voltage. As a result, semiconductor cells suffer a trade-off between voltage and current (which can be in part alleviated by using multiple junction implementations). The detailed balance calculation shows that this efficiency cannot exceed 33% if one uses a single material with an ideal bandgap of 1.34 eV for a solar cell.
The band gap (1.34 eV) of an ideal single-junction cell is close to that of silicon (1.1 eV), one of the many reasons that silicon dominates the market. However, silicon's efficiency is limited to about 30% (Shockley-Queisser limit). It is possible to improve on a single-junction cell by vertically stacking cells with different bandgaps, termed a "tandem" or "multi-junction" approach. The same analysis shows that a two-layer cell should have one layer tuned to 1.64 eV and the other to 0.94 eV, providing a theoretical performance of 44%. A three-layer cell should be tuned to 1.83, 1.16 and 0.71 eV, with an efficiency of 48%. An "infinity-layer" cell would have a theoretical efficiency of 86%, with other thermodynamic loss mechanisms accounting for the rest.
Traditional (crystalline) silicon preparation methods do not lend themselves to this approach due to lack of bandgap tunability. Thin films of amorphous silicon, which due to a relaxed requirement in crystal momentum preservation can achieve direct bandgaps and intermixing of carbon, can tune the bandgap, but other issues have prevented these from matching the performance of traditional cells.[4] Most tandem-cell structures are based on higher performance semiconductors, notably indium gallium arsenide (InGaAs). Three-layer InGaAs/GaAs/InGaP cells (bandgaps 0.94/1.42/1.89 eV) hold the efficiency record of 42.3% for experimental examples. However, QDSCs suffer from weak absorption, and the contribution of the light absorption at room temperature is marginal. This can be addressed by utilizing multibranched Au nanostars.
Quantum dots are semiconducting particles that have been reduced below the size of the exciton Bohr radius; due to quantum mechanical considerations, the electron energies that can exist within them become finite, much like energies in an atom. Quantum dots have been referred to as "artificial atoms". These energy levels are tunable by changing their size, which in turn defines the bandgap. The dots can be grown over a range of sizes, allowing them to express a variety of bandgaps without changing the underlying material or construction techniques.[7] In typical wet chemistry preparations, the tuning is accomplished by varying the synthesis duration or temperature.
The ability to tune the bandgap makes quantum dots desirable for solar cells. For the sun's photon distribution spectrum, the Shockley-Queisser limit indicates that the maximum solar conversion efficiency occurs in a material with a band gap of 1.34 eV. However, materials with lower band gaps will be better suited to generate electricity from lower-energy photons (and vice versa). Single-junction implementations using lead sulfide (PbS) colloidal quantum dots (CQD) have bandgaps that can be tuned into the far infrared, frequencies that are typically difficult to achieve with traditional solar cells. Half of the solar energy reaching the Earth is in the infrared, most in the near-infrared region. A quantum dot solar cell makes infrared energy as accessible as any other.
Moreover, CQD offer easy synthesis and preparation. While suspended in a colloidal liquid form, they can be easily handled throughout production, with a fumehood as the most complex equipment needed. CQD are typically synthesized in small batches, but can be mass-produced. The dots can be distributed on a substrate by spin coating, either by hand or in an automated process. Large-scale production could use spray-on or roll-printing systems, dramatically reducing module construction costs.
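Since the discussion keeps converting between bandgap energies and regions of the spectrum, a minimal sketch of that conversion, using the standard relation λ[nm] ≈ 1239.84 / E[eV]:
def cutoff_wavelength_nm(bandgap_ev):
    # photons with wavelengths longer than this pass through unabsorbed
    return 1239.84 / bandgap_ev  # hc expressed in eV*nm
for gap in (1.34, 1.1, 0.94):  # ideal single junction, silicon, tandem bottom layer
    print(f"{gap} eV -> cutoff ~{cutoff_wavelength_nm(gap):.0f} nm")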

Tsetse Fly

As small as they may seem, tsetse flies are deadly: they transmit the parasites that cause African sleeping sickness (trypanosomiasis), a dreaded disease that has killed hundreds of thousands of people during major epidemics. A tsetse fly is a large blood-sucking insect; the disease is named for the disrupted sleep cycles of its late stage, not because the fly attacks people in their sleep.

How Digital Cameras Work

Let's say you want to take a picture and e-mail it to a friend. To do this, you need the image to be represented in the language that computers recognize -- bits and bytes. Essentially, a digital image is just a long string of 1s and 0s that represent all the tiny colored dots -- or pixels -- that collectively make up the image. (Digitizing light waves works much the same way as digitizing sound waves.)
At its most basic level, this is all there is to a digital camera. Just like a conventional camera, it has a series of lenses that focus light to create an image of a scene. But instead of focusing this light onto a piece of film, it focuses it onto a semiconductor device that records light electronically. A computer then breaks this electronic information down into digital data. All the fun and interesting features of digital cameras come as a direct result of this process.
Instead of film, a digital camera has a sensor that converts light into electrical charges. The image sensor employed by most digital cameras is a charge coupled device (CCD). Some cameras use complementary metal oxide semiconductor (CMOS) technology instead. Both CCD and CMOS image sensors convert light into electrons. If you've read How Solar Cells Work, you already understand one of the pieces of technology used to perform the conversion. A simplified way to think about these sensors is to think of a 2-D array of thousands or millions of tiny solar cells.
Once the sensor converts the light into electrons, it reads the value (accumulated charge) of each cell in the image. This is where the differences between the two main sensor types kick in: a CCD transports the charge across the chip and reads it at one corner of the array. An analog-to-digital converter (ADC) then turns each pixel's value into a digital value by measuring the amount of charge at each photosite and converting that measurement to binary form. CMOS devices use several transistors at each pixel to amplify and move the charge using more traditional wires.
Differences between the two types of sensors lead to a number of pros and cons:
- CCD sensors create high-quality, low-noise images. CMOS sensors are generally more susceptible to noise.
- Because each pixel on a CMOS sensor has several transistors located next to it, the light sensitivity of a CMOS chip is lower. Many of the photons hit the transistors instead of the photodiode.
- CMOS sensors traditionally consume little power. CCDs, on the other hand, use a process that consumes lots of power -- as much as 100 times more than an equivalent CMOS sensor.
- CCD sensors have been mass produced for a longer period of time, so they are more mature. They tend to have higher quality pixels, and more of them.
Although numerous differences exist between the two sensors, they both play the same role in the camera -- they turn light into electricity. For the purpose of understanding how a digital camera works, you can think of them as nearly identical devices. Unfortunately, each photosite is colorblind. It only keeps track of the total intensity of the light that strikes its surface. In order to get a full color image, most sensors use filtering to look at the light in its three primary colors. Once the camera records all three colors, it combines them to create the full spectrum. There are several ways of recording the three colors in a digital camera.
The highest quality cameras use three separate sensors, each with a different filter. A beam splitter directs light to the different sensors. Think of the light entering the camera as water flowing through a pipe. Using a beam splitter would be like dividing an identical amount of water into three different pipes. Each sensor gets an identical look at the image; but because of the filters, each sensor only responds to one of the primary colors. The advantage of this method is that the camera records each of the three colors at each pixel location. Unfortunately, cameras that use this method tend to be bulky and expensive.
Another method is to rotate a series of red, blue and green filters in front of a single sensor. The sensor records three separate images in rapid succession. This method also provides information on all three colors at each pixel location; but since the three images aren't taken at precisely the same moment, both the camera and the target of the photo must remain stationary for all three readings. This isn't practical for candid photography or handheld cameras. Both of these methods work well for professional studio cameras, but they're not necessarily practical for casual snapshots. Next, we'll look at filtering methods that are more suited to small, efficient cameras.
A more economical and practical way to record the primary colors is to permanently place a filter called a color filter array over each individual photosite. By breaking up the sensor into a variety of red, blue and green pixels, it is possible to get enough information in the general vicinity of each sensor to make very accurate guesses about the true color at that location. This process of looking at the other pixels in the neighborhood of a sensor and making an educated guess is called interpolation.
The most common pattern of filters is the Bayer filter pattern. This pattern alternates a row of red and green filters with a row of blue and green filters. The pixels are not evenly divided -- there are as many green pixels as there are blue and red combined. This is because the human eye is not equally sensitive to all three colors. It's necessary to include more information from the green pixels in order to create an image that the eye will perceive as a "true color." The advantages of this method are that only one sensor is required, and all the color information (red, green and blue) is recorded at the same moment. That means the camera can be smaller, cheaper, and useful in a wider variety of situations.
The raw output from a sensor with a Bayer filter is a mosaic of red, green and blue pixels of different intensity. Digital cameras use specialized demosaicing algorithms to convert this mosaic into an equally sized mosaic of true colors. The key is that each colored pixel can be used more than once. The true color of a single pixel can be determined by averaging the values from the closest surrounding pixels.
Some single-sensor cameras use alternatives to the Bayer filter pattern. X3 technology, for example, embeds red, green and blue photodetectors in silicon. Some of the more advanced cameras subtract values using the typesetting colors cyan, yellow, green and magenta instead of blending red, green and blue. There is even a method that uses two sensors. However, most consumer cameras on the market today use a single sensor with alternating rows of green/red and green/blue filters.
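A minimal sketch of that neighbor-averaging idea (a toy demosaic with made-up raw values, not any camera's actual algorithm):
# RGGB Bayer layout: each photosite records only one color channel.
bayer = [["R", "G", "R", "G"],
         ["G", "B", "G", "B"],
         ["R", "G", "R", "G"],
         ["G", "B", "G", "B"]]
raw = [[10, 60, 12, 58],
       [55, 90, 57, 92],
       [11, 62, 13, 59],
       [54, 88, 56, 91]]
def estimate(y, x, color):
    # use the recorded value if this photosite has the right filter...
    if bayer[y][x] == color:
        return raw[y][x]
    # ...otherwise average the neighbors that did record that color
    vals = [raw[y + dy][x + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if 0 <= y + dy < 4 and 0 <= x + dx < 4
            and bayer[y + dy][x + dx] == color]
    return sum(vals) / len(vals)
print([estimate(1, 1, c) for c in "RGB"])  # full RGB estimate for pixel (1, 1)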

what happens in your body when dizzy?

Lightheadedness often occurs when you move quickly from a seated to a standing position. This positional change results in decreased blood flow to the brain. This can create a drop in blood pressure that makes you feel faint. ... It's often caused by problems with the inner ear, brain, heart, or use of certain medications.

negative energy density

Negative energy is a concept used in physics to explain the nature of certain fields, including the gravitational field and various quantum field effects. In more speculative theories, negative energy is involved in wormholes which may allow for time travel and warp drives for faster-than-light space travel.

My parents told me phones and tech emit dangerous radiation, is it true?

No, it is not. Phones and other devices that broadcast (tablets, laptops, you name it ...) emit electromagnetic (EM) radiation. EM radiation comes in many different forms, but it is typically characterized by its frequency (or wavelength, the two are directly connected). Most mobile devices communicate with EM signals in the frequency range running from a few hundred megahertz (MHz) to a few gigahertz (GHz).
So what happens when we're hit with EM radiation? Well, it depends on the frequency. The frequency of the radiation determines the energy of the individual photons that make up the radiation. Higher frequency = higher energy photons. If photons have sufficiently high energy, they can damage a molecule and, by extension, a cell in your body. There's no exact frequency threshold from which point on EM radiation can cause damage in this way, but 1 petahertz (PHz, or 1,000,000 GHz) is a good rough estimate. For photons that don't have this much energy, the most they can hope to achieve is to see their energy converted into heat.
Converting EM radiation into heat is the #1 activity of a very popular kitchen appliance: the microwave oven. This device emits EM radiation with a frequency of about 2.4 GHz to heat your milk and burn your noodles (while leaving parts of the meal suspiciously cold). The attentive reader should now say to themselves: wait a minute! This 2.4 GHz of the microwave oven is right there between the "few hundred MHz" and "few GHz" frequency range of our mobile devices. So are our devices mini-microwave ovens? As it turns out, 2.4 GHz is also the frequency used by many wifi routers (and devices connecting to them), which coincidentally is the reason why poorly shielded microwave ovens can cause dropped wifi connections when active.
But this is where the second important variable that determines the effects of EM radiation comes into play: intensity. A microwave oven operates with a power of somewhere around 1,000 W (depending on the model), whereas a router has a broadcast power that is limited (by law, in most countries) to 0.1 W. That makes a microwave oven 10,000 times more powerful than a wifi router at maximum output. And mobile devices typically broadcast at even lower intensities, to conserve battery. And while microwave ovens are designed to focus their radiation on a small volume in the interior of the oven, routers and mobile devices throw their radiation out in every direction. So, not only is the EM radiation emitted by our devices not energetic enough to cause direct damage, the intensity with which it is emitted is orders of magnitude too low to cause any noticeable heating.
But to close, I would like to discuss one more source of EM radiation. A source from which we receive radiation with frequencies ranging from 100 terahertz (THz) to 1 PHz or even slightly more. Yes, that overlaps with the range of potentially damaging radiation. And even more, the intensity of this radiation varies, but can reach up to tens of W. That's not the total emitted, but the total that directly reaches a human being. Not quite microwave oven level, but enough to make you feel much hotter when exposed to it. So what is this source of EM radiation and why isn't it banned yet? The source is none other than the Sun. (And it's probably not yet banned due to the powerful agricultural lobby.) Our Sun blasts us with radiation that is far more energetic (to the point where it can be damaging) than anything our devices produce and with far greater intensity.
Even indoors, behind a window, you'll receive so much more energy from the Sun (directly or indirectly when reflected by the sky or various objects) than you do from the ensemble of our mobile devices.
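A quick back-of-envelope check of the photon-energy argument above (standard constants; the frequencies are illustrative picks):
h = 6.626e-34   # Planck constant, J*s
eV = 1.602e-19  # joules per electronvolt
for name, f_hz in [("phone/wifi (2.4 GHz)", 2.4e9),
                   ("visible light (~540 THz)", 5.4e14),
                   ("ionizing ballpark (1 PHz)", 1e15)]:
    print(f"{name}: {h * f_hz / eV:.1e} eV per photon")
# Radio photons carry ~1e-5 eV, far below the few-eV scale of chemical bonds.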

stromatolites

Oldest known fossils formed from many layers of bacteria and sediment.

Planck's law

Planck's law describes the spectral density of electromagnetic radiation emitted by a black body in thermal equilibrium at a given temperature T, when there is no net flow of matter or energy between the body and its environment.
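For reference, the spectral radiance in LaTeX form: B_\nu(\nu, T) = \frac{2h\nu^3}{c^2} \, \frac{1}{e^{h\nu/(k_B T)} - 1}, where h is Planck's constant, c the speed of light, and k_B the Boltzmann constant.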

is there a stable version of element 115?

Researchers in Sweden have confirmed the existence of element 115, and it sticks around for a surprisingly long time. Scientists believe it may bring them closer to the mythical "island of stability": a whole slew of super-heavy elements that could last for days or even years. Moscovium is a synthetic chemical element with the symbol Mc and atomic number 115. No stable isotope is known; even its longest-lived known isotopes decay in under a second.

square wombat poop

Scientists say they have uncovered how and why wombats produce cube-shaped poo - the only known species to do so. The Australian marsupial can pass up to 100 deposits of poop a night, and they use the piles to mark territory. The cubic shape helps stop the poop rolling away.

Statistical physics

Statistical physics is a branch of physics that uses methods of probability theory and statistics, and particularly the mathematical tools for dealing with large populations and approximations, in solving physical problems. It can describe a wide variety of fields with an inherently stochastic nature. Its applications include many problems in the fields of physics, biology, chemistry, neuroscience, and even some social sciences, such as sociology[1] and linguistics.[2] Its main purpose is to clarify the properties of matter in aggregate, in terms of physical laws governing atomic motion.[3] tbh id argue quantum mechanics as stats

Supercritical drying

Supercritical drying, also known as critical point drying, is a process to remove liquid in a precise and controlled way.[1] It is useful in the production of microelectromechanical systems (MEMS), the drying of spices, the production of aerogel, the decaffeination of coffee and in the preparation of biological specimens for scanning electron microscopy. As the substance in a liquid body crosses the boundary from liquid to gas (see green arrow in phase diagram), the liquid changes into gas at a finite rate, while the amount of liquid decreases. When this happens within a heterogeneous environment, surface tension in the liquid body pulls against any solid structures the liquid might be in contact with. Delicate structures such as cell walls, the dendrites in silica gel, and the tiny machinery of microelectromechanical devices, tend to be broken apart by this surface tension as the liquid-gas-solid junction moves by.
- involves steam drying of products containing water
- this process is feasible because water in the product is boiled off and joined with the drying medium, increasing its flow
- usually employed in closed circuit and allows a proportion of latent heat to be recovered by recompression

Luddite origin

The Luddites were a secret oath-based organization of English textile workers in the 19th century, a radical faction which destroyed textile machinery as a form of protest. The group was protesting against the use of machinery in a "fraudulent and deceitful manner" to get around standard labour practices.

Tsiolkovsky rocket equation

The Tsiolkovsky rocket equation, classical rocket equation, or ideal rocket equation is a mathematical equation that describes the motion of vehicles that follow the basic principle of a rocket: a device that can apply acceleration to itself using thrust by expelling part of its mass with high velocity can thereby move due to the conservation of momentum. The equation relates the delta-v (the maximum change of velocity of the rocket if no other external forces act) to the effective exhaust velocity and the initial and final mass of a rocket, or other reaction engine.
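For reference, in LaTeX form: \Delta v = v_e \ln \frac{m_0}{m_f}, where v_e is the effective exhaust velocity, m_0 the initial (wet) mass, and m_f the final (dry) mass. Because \Delta v grows only logarithmically with the mass ratio, adding propellant has sharply diminishing returns, which is why rockets use staging.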

Weissenberg effect

The Weissenberg effect is a phenomenon that occurs when a spinning rod is inserted into a solution of elastic liquid. Instead of being thrown outward, the solution is drawn towards the rod and rises up around it. This is a direct consequence of the normal stress that acts like a hoop stress around the rod.

water absorption spectrum

The absorption of electromagnetic radiation by water depends on the state of the water. The absorption in the gas phase occurs in three regions of the spectrum. Rotational transitions are responsible for absorption in the microwave and far-infrared, vibrational transitions in the mid-infrared and near-infrared. Vibrational bands have rotational fine structure. Electronic transitions occur in the vacuum ultraviolet regions. Liquid water has no rotational spectrum but does absorb in the microwave region. Its weak absorption in the visible spectrum results in the pale blue color of water.

Magnetic tape

The tape itself is actually very simple. It consists of a thin plastic base material, and bonded to this base is a coating of ferric oxide powder. The oxide is normally mixed with a binder to attach it to the plastic, and it also includes some sort of dry lubricant to avoid wearing out the recorder.
Iron(II) oxide (FeO) is one oxide of iron; the red rust we commonly see is ferric oxide (Fe2O3). Maghemite, or gamma ferric oxide, is the common name for the form used on tape. This oxide is a ferromagnetic material, meaning that if you expose it to a magnetic field it is permanently magnetized by the field. That ability gives magnetic tape two of its most appealing features: you can record anything you want instantly, and the tape will remember what you recorded for playback at any time; and you can erase the tape and record something else on it any time you like. These two features are what make tapes and disks so popular -- they are instant and they are easily changed.
Audio tapes have gone through several format changes over the years. The original format was not tape at all, but actually was a thin steel wire. The wire recorder was invented in 1900 by Valdemar Poulsen. German engineers perfected the first tape recorders using oxide tapes in the 1930s. Tapes originally appeared in a reel-to-reel format. Reel-to-reel tapes were common until the compact cassette or "cassette tape" took hold of the market. The cassette was patented in 1964 and eventually beat out 8-track tapes and reel-to-reel to become the dominant tape format in the audio industry.
If you look inside a compact cassette, you will find that it is a fairly simple device. There are two spools and the long piece of tape, two rollers and two halves of a plastic outer shell with various holes and cutouts to hook the cassette into the drive. There is also a small felt pad that acts as a backstop for the record/playback head in the tape player. In a 90-minute cassette, the tape is 443 feet (135 meters) long.
The basic idea involves an electromagnet that applies a magnetic flux to the oxide on the tape. The oxide permanently "remembers" the flux it sees. A tape recorder's record head is a very small, circular electromagnet with a small gap in it. This electromagnet is tiny -- perhaps the size of a flattened pea. The electromagnet consists of an iron core wrapped with wire. During recording, the audio signal is sent through the coil of wire to create a magnetic field in the core. At the gap, magnetic flux forms a fringe pattern to bridge the gap, and this flux is what magnetizes the oxide on the tape. During playback, the motion of the tape pulls a varying magnetic field across the gap. This creates a varying magnetic field in the core and therefore a signal in the coil. This signal is amplified to drive the speakers.
In a normal cassette player, there are actually two of these small electromagnets that together are about as wide as one half of the tape's width. The two heads record the two channels of a stereo program. When you turn the tape over, you align the other half of the tape with the two electromagnets. At the top of the cassette mechanism are the two sprockets that engage the spools inside the cassette. These sprockets spin one of the spools to take up the tape during recording, playback, fast forward and reverse. Below the two sprockets are two heads.
The head on the left is a bulk erase head to wipe the tape clean of signals before recording. The head in the center is the record and playback head containing the two tiny electromagnets. On the right are the capstan and the pinch roller. The capstan revolves at a very precise rate to pull the tape across the head at exactly the right speed. The standard speed is 1.875 inches per second (4.76 cm per second). The roller simply applies pressure so that the tape is tight against the capstan.
Tape formulations:
- Type 0 - the original ferric-oxide tape; very rarely seen these days.
- Type 1 - standard ferric-oxide tape, also referred to as "normal bias."
- Type 2 - "chrome" or CrO2 tape; the ferric-oxide particles are mixed with chromium dioxide.
- Type 4 - "metal" tape; metallic particles rather than metal-oxide particles are used in the tape.

longtermism

The practice of making decisions with a view to long-term objectives or consequences; having foresight.

absorption spectrum

The range of a pigment's ability to absorb various wavelengths of light. Absorption spectroscopy refers to spectroscopic techniques that measure the absorption of radiation, as a function of frequency or wavelength, due to its interaction with a sample. The sample absorbs energy, i.e., photons, from the radiating field.

Cronus

Titan ruler of the universe; father of Zeus. He overthrew and castrated his father Ouranos (Uranus, Sky), ruled the cosmos during the mythological Golden Age, and was in turn overthrown by his own son Zeus and imprisoned in Tartarus.

When is leap year?

We add a leap day every four years, except in years divisible by 100, unless the year is also divisible by 400, in which case it is a leap year. So 1996 was a leap year, but 1997, 1998, and 1999 were not. The year 2000 was a leap year: even though it is divisible by 100, it is also divisible by 400.
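The rule translates directly into code; a minimal sketch:
def is_leap(year):
    # every 4 years, except centuries, except every 400 years
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
assert is_leap(1996) and is_leap(2000)
assert not any(is_leap(y) for y in (1900, 1997, 1998, 1999))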

Fermi paradox

Where is everybody? The apparent contradiction between high estimates of the probability of extraterrestrial civilizations and the total lack of evidence for them. (Universe be big doe, where da aliens at cuz?)

copernican universe

heliocentrism

gyrate

move or cause to move in a circle or spiral, especially quickly.

Anthropic Principle

The theory that the universe contains all the necessary properties that make the existence of intelligent life inevitable; an observer will always observe a universe that can make observers.

Stargate

A Stargate is an Einstein-Rosen bridge portal device within the Stargate fictional universe that allows practical, rapid travel between two distant locations.

Bussard ramjet

A proposed space travel technology using enormous electromagnetic fields as a scoop to collect and compress hydrogen and trigger thermonuclear fusion, thus powering the vessel. Bussard[1] proposed a ramjet variant of a fusion rocket capable of reasonable interstellar travel, using enormous electromagnetic fields (ranging from kilometers to many thousands of kilometers in diameter) as a ram scoop to collect and compress hydrogen from the interstellar medium. High speeds force the reactive mass into a progressively constricted magnetic field, compressing it until thermonuclear fusion occurs. The magnetic field then directs the energy as rocket exhaust opposite to the intended direction of travel, thereby accelerating the vessel.

CRISPR-Cas12a

Alt-R CRISPR-Cas12a System: simple, 2-step delivery of ribonucleoprotein complexes (crRNA:Cas12a). The CRISPR-Cas12a genome editing method uses the Cas12a endonuclease to generate double-stranded breaks that leave a staggered 5′ overhang. Cas12a requires only a single CRISPR RNA (crRNA) to specify the DNA target sequence. After cleavage, the DNA is then repaired by non-homologous end-joining (NHEJ) or homology-directed repair (HDR), resulting in a modified sequence. Alt-R CRISPR-Cas12a reagents provide essential, optimized tools needed to use this pathway for genome editing research. https://www.idtdna.com/pages/products/crispr-genome-editing/alt-r-crispr-cpf1-genome-editing no clue..

Tricritical point

In condensed matter physics, dealing with the macroscopic physical properties of matter, a tricritical point is a point in the phase diagram of a system at which three-phase coexistence terminates.[1] This definition is clearly parallel to the definition of an ordinary critical point as the point at which two-phase coexistence terminates.

de Sitter invariant special relativity

In mathematical physics, de Sitter invariant special relativity is the speculative idea that the fundamental symmetry group of spacetime is the indefinite orthogonal group SO(4,1), that of de Sitter space. In the standard theory of general relativity, de Sitter space is a highly symmetrical special vacuum solution, which requires a cosmological constant or the stress-energy of a constant scalar field to sustain. The idea of de Sitter invariant relativity is to require that the laws of physics are not fundamentally invariant under the Poincaré group of special relativity, but under the symmetry group of de Sitter space instead. With this assumption, empty space automatically has de Sitter symmetry, and what would normally be called the cosmological constant in general relativity becomes a fundamental dimensional parameter describing the symmetry structure of spacetime.

Double pendulum

In physics and mathematics, in the area of dynamical systems, a double pendulum is a pendulum with another pendulum attached to its end, and is a simple physical system that exhibits rich dynamic behavior with a strong sensitivity to initial conditions.

Abraham-Lorentz force

In the physics of electromagnetism, the Abraham-Lorentz force (also Lorentz-Abraham force) is the recoil force on an accelerating charged particle caused by the particle emitting electromagnetic radiation. It is also called the radiation reaction force, radiation damping force[1] or the self-force.
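For reference, in SI units and LaTeX form: \mathbf{F}_{\mathrm{rad}} = \frac{q^2}{6\pi\varepsilon_0 c^3}\,\dot{\mathbf{a}}, a recoil force on a charge q proportional to its jerk (the time derivative of acceleration).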

what makes you itch?

Itching is often triggered by histamine, a chemical in the body associated with immune responses. It causes the itch and redness you see with insect bites, rashes and skin dryness or damage.

Virtue signalling

Showing, through your actions, your allegiance to a particular group or ideology; showing people how good/dedicated you are. The action or practice of publicly expressing opinions or sentiments intended to demonstrate one's good character or the moral correctness of one's position on a particular issue.

Supercritical water oxidation

Supercritical water oxidation (SCWO) is a process that occurs in water at temperatures and pressures above the mixture's thermodynamic critical point (for pure water, about 374 °C and 22.1 MPa). Under these conditions water becomes a fluid with unique properties that can be used to advantage in the destruction of hazardous wastes such as PCBs. The fluid has a density between that of water vapor and liquid at standard conditions, and exhibits high gas-like diffusion rates along with high liquid-like collision rates. In addition, the behavior of water as a solvent is altered (in comparison to that of subcritical liquid water): it behaves much less like a polar solvent. As a result, the solubility behavior is "reversed" so that chlorinated hydrocarbons become soluble in the water, allowing single-phase reaction of aqueous waste with a dissolved oxidizer. The reversed solubility also causes salts to precipitate out of solution, meaning they can be treated using conventional methods for solid-waste residuals. Efficient oxidation reactions occur at low temperature (400-650 °C) with reduced NOx production.

geiger counter

The counter consists of a tube filled with an inert gas that becomes conductive of electricity when it is impacted by a high-energy particle. When a Geiger counter is exposed to ionizing radiation, the particles penetrate the tube and collide with the gas, releasing more electrons.

Lower critical solution temperature

The lowest temperature at which phase separation occurs The lower critical solution temperature (LCST) or lower consolute temperature is the critical temperature below which the components of a mixture are miscible for all compositions.[1][2] The word lower indicates that the LCST is a lower bound to a temperature interval of partial miscibility, or miscibility for certain compositions only. The phase behavior of polymer solutions is an important property involved in the development and design of most polymer-related processes. Partially miscible polymer solutions often exhibit two solubility boundaries, the upper critical solution temperature (UCST) and the lower critical solution temperature (LCST), which both depend on the molar mass and the pressure. At temperatures below LCST, the system is completely miscible in all proportions, whereas above LCST partial liquid miscibility occurs.[3][4]

Weak gravity conjecture

The weak gravity conjecture (WGC) is a conjecture regarding the strength gravity can have in a theory of quantum gravity relative to the gauge forces in that theory. It roughly states that gravity should be the weakest force in any consistent theory of quantum gravity.[1]

blue whale heart rate

When diving, the whale's heart slowed to 4-8 beats per minute, with a minimum of two beats per minute. When the whale was feeding at the bottom of the ocean, the heart rate rose to about 2.5 times the minimum, then gradually slowed again. Overall, the whale's heart rate see-sawed wildly, pumping as many as 34 times per minute at the surface and as few as two beats per minute at the deepest depths, about 30% to 50% slower than the researchers expected.

Zeitgeber

A stimulus that resets the circadian rhythm. A zeitgeber is any external or environmental cue that entrains or synchronizes an organism's biological rhythms to the Earth's 24-hour light/dark cycle and 12-month cycle.

Cholesterol

Cholesterol is a waxy, fat-like compound that belongs to a class of molecules called steroids. It's found in many foods, in your bloodstream and in all your body's cells. If you had a handful of cholesterol, it might feel like a soft, melted candle. Cholesterol is essential for: formation and maintenance of cell membranes (helps the cell to resist changes in temperature and protects and insulates nerve fibers); formation of steroid hormones (progesterone, testosterone, estradiol, cortisol); production of bile salts, which help to digest food; and conversion into vitamin D in the skin when exposed to sunlight.
It may surprise you to know that our bodies make all the cholesterol we need. When your doctor takes a blood test to measure your cholesterol level, the doctor is actually measuring the amount of circulating cholesterol in your blood, or your blood cholesterol level. About 85 percent of your blood cholesterol level is endogenous, which means it is produced by your body. The other 15 percent or so comes from an external source -- your diet. Your dietary cholesterol originates from meat, poultry, fish, seafood and dairy products. It's possible for some people to eat foods high in cholesterol and still have low blood cholesterol levels. Likewise, it's possible to eat foods low in cholesterol and have a high blood cholesterol level.
An increase in dietary cholesterol has been associated with atherosclerosis, the build-up of plaques that can narrow or block blood vessels. (Think about what happens to your kitchen drain pipes when you pour chicken fat down the sink.) If the coronary arteries of the heart become blocked, a heart attack can occur. The blocked artery can also develop rough edges. This can cause plaques to break off and travel, obstructing blood vessels elsewhere in the body. A blocked blood vessel in the brain can trigger a stroke.
Comments about "good" and "bad" cholesterol refer to the type of carrier molecule that transports the cholesterol. These carrier molecules are made of protein and are called apoproteins. They are necessary because cholesterol and other fats (lipids) can't dissolve in water, which also means they can't dissolve in blood. When these apoproteins are joined with cholesterol, they form compounds called lipoproteins. The density of these lipoproteins is determined by the amount of protein in the molecule. "Bad" cholesterol is the low-density lipoprotein (LDL), the major cholesterol carrier in the blood. High levels of these LDLs are associated with atherosclerosis. "Good" cholesterol is the high-density lipoprotein (HDL); a greater level of HDL -- think of this as drain cleaner you pour in the sink -- is thought to provide some protection against artery blockage.
Cholesterol (from the Ancient Greek chole- (bile) and stereos (solid), followed by the chemical suffix -ol for an alcohol) is an organic molecule. It is a sterol (or modified steroid), a type of lipid. Cholesterol is biosynthesized by all animal cells and is an essential structural component of animal cell membranes. Cholesterol also serves as a precursor for the biosynthesis of steroid hormones, bile acid and vitamin D. Cholesterol is the principal sterol synthesized by all animals. In vertebrates, hepatic cells typically produce the greatest amounts. It is absent among prokaryotes (bacteria and archaea), although there are some exceptions, such as Mycoplasma, which require cholesterol for growth.
Cholesterol is essential for all animal life, with each cell capable of synthesizing it by way of a complex 37-step process. This begins with the mevalonate or HMG-CoA reductase pathway, the target of statin drugs, which encompasses the first 18 steps. This is followed by 19 additional steps to convert the resulting lanosterol into cholesterol. A human male weighing 68 kg (150 lb) normally synthesizes about 1 gram (1,000 mg) of cholesterol per day, and his body contains about 35 g, mostly contained within the cell membranes. Typical daily cholesterol dietary intake for a man in the United States is 307 mg.
Most ingested cholesterol is esterified, which causes it to be poorly absorbed by the gut. The body also compensates for absorption of ingested cholesterol by reducing its own cholesterol synthesis.[9] For these reasons, cholesterol in food, seven to ten hours after ingestion, has little, if any, effect on concentrations of cholesterol in the blood.[10] However, during the first seven hours after ingestion of cholesterol, as absorbed fats are being distributed around the body within extracellular water by the various lipoproteins (which transport all fats in the water outside cells), the concentrations increase.
Cholesterol, given that it composes about 30% of all animal cell membranes, is required to build and maintain membranes and modulates membrane fluidity over the range of physiological temperatures. The hydroxyl group of each cholesterol molecule interacts with water molecules surrounding the membrane, as do the polar heads of the membrane phospholipids and sphingolipids, while the bulky steroid and the hydrocarbon chain are embedded in the membrane, alongside the nonpolar fatty-acid chain of the other lipids. Through the interaction with the phospholipid fatty-acid chains, cholesterol increases membrane packing, which both alters membrane fluidity[16] and maintains membrane integrity so that animal cells do not need to build cell walls (like plants and most bacteria). The membrane remains stable and durable without being rigid, allowing animal cells to change shape and animals to move.

belousov zhabotinsky reaction

A Belousov-Zhabotinsky reaction, or BZ reaction, is one of a class of reactions that serve as a classical example of non-equilibrium thermodynamics, resulting in the establishment of a nonlinear chemical oscillator. The only common element in these oscillators is the inclusion of bromine and an acid. The reactions are important to theoretical chemistry in that they show that chemical reactions do not have to be dominated by equilibrium thermodynamic behavior. These reactions are far from equilibrium, remain so for a significant length of time, and evolve chaotically.[1] In this sense, they provide an interesting chemical model of nonequilibrium biological phenomena; as such, mathematical models and simulations of the BZ reactions themselves are of theoretical interest, showing phenomena such as noise-induced order.[2]

Michelson interferometer

A Michelson interferometer is an optical instrument that produces interference fringes by dividing the source radiation with a beamsplitter: one beam is directed into an arm of the interferometer and strikes a fixed mirror, while a second beam is directed into a different arm and strikes a movable mirror. When the reflected beams are brought back together, an interference pattern (the interferogram) results. Used by LIGO.
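A useful relation, stated for reference: moving the mirror by \Delta d changes the round-trip path by 2\Delta d, shifting the pattern by \Delta N = \frac{2\,\Delta d}{\lambda} fringes. With a 633 nm HeNe laser, one full fringe corresponds to a mirror displacement of only about 317 nm, which is why the instrument makes such a sensitive length gauge.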

what polygraph does?

A polygraph, popularly referred to as a lie detector test, is a device or procedure that measures and records several physiological indicators such as blood pressure, pulse, respiration, and skin conductivity while a person is asked and answers a series of questions. Can supposedly be beaten if you daydream, get into the foggy mind state, or have a pin in your shoe to maintain constant pain

Supercritical fluid extraction

A relatively new technique for extracting compounds from a raw material, which often employs supercritical CO2. Supercritical fluid extraction (SFE) is the process of separating one component (the extractant) from another (the matrix) using supercritical fluids as the extracting solvent. Extraction is usually from a solid matrix, but can also be from liquids. SFE can be used as a sample preparation step for analytical purposes, or on a larger scale to either strip unwanted material from a product (e.g. decaffeination) or collect a desired product (e.g. essential oils). These essential oils can include limonene and other straight solvents.
Carbon dioxide (CO2) is the most used supercritical fluid, sometimes modified by co-solvents such as ethanol or methanol. Extraction conditions for supercritical carbon dioxide are above the critical temperature of 31 °C and critical pressure of 74 bar. Addition of modifiers may slightly alter this. The discussion below will mainly refer to extraction with CO2, except where specified.
The properties of the supercritical fluid can be altered by varying the pressure and temperature, allowing selective extraction. For example, volatile oils can be extracted from a plant with low pressures (100 bar), whereas liquid extraction would also remove lipids. Lipids can be removed using pure CO2 at higher pressures, and then phospholipids can be removed by adding ethanol to the solvent.[1] The same principle can be used to extract polyphenols and unsaturated fatty acids separately from wine wastes.[2]
Extraction is a diffusion-based process, in which the solvent is required to diffuse into the matrix and the extracted material to diffuse out of the matrix into the solvent. Diffusivities are much faster in supercritical fluids than in liquids, and therefore extraction can occur faster. In addition, due to the lack of surface tension and negligible viscosities compared to liquids, the solvent can penetrate further into parts of the matrix inaccessible to liquids. An extraction using an organic liquid may take several hours, whereas supercritical fluid extraction can be completed in 10 to 60 minutes.[3]

turbofan engine

A turbofan engine, sometimes referred to as a fanjet or bypass engine, is a jet engine variant which produces thrust using a combination of jet core efflux and bypass air which has been accelerated by a ducted fan that is driven by the jet core. The turbofan or fanjet is a type of airbreathing jet engine that is widely used in aircraft propulsion. The word "turbofan" is a portmanteau of "turbine" and "fan": the turbo portion refers to a gas turbine engine which achieves mechanical energy from combustion,[1] and the fan, a ducted fan that uses the mechanical energy from the gas turbine to accelerate air rearwards. Thus, whereas all the air taken in by a turbojet passes through the turbine (through the combustion chamber), in a turbofan some of that air bypasses the turbine. A turbofan thus can be thought of as a turbojet being used to drive a ducted fan, with both of these contributing to the thrust. The ratio of the mass-flow of air bypassing the engine core divided by the mass-flow of air passing through the core is referred to as the bypass ratio. The engine produces thrust through a combination of these two portions working together; engines that use more jet thrust relative to fan thrust are known as low-bypass turbofans, conversely those that have considerably more fan thrust than jet thrust are known as high-bypass. Most commercial aviation jet engines in use today are of the high-bypass type,[2][3] and most modern military fighter engines are low-bypass.[4][5] Afterburners are not used on high-bypass turbofan engines but may be used on either low-bypass turbofan or turbojet engines. Modern turbofans have either a large single-stage fan or a smaller fan with several stages. An early configuration combined a low-pressure turbine and fan in a single rear-mounted unit.
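
The bypass ratio defined above is just a quotient of mass flows; a one-line check with made-up but plausible numbers for a modern high-bypass engine:

m_bypass, m_core = 1100.0, 110.0     # assumed mass flows, kg/s
print(f"bypass ratio = {m_bypass / m_core:.0f}:1")   # 10:1 -> high-bypass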

turbo shaft engine

A turboshaft engine is a variant of a jet engine that has been optimised to produce shaft power to drive machinery instead of producing thrust. Turboshaft engines are most commonly used in applications that require a small, but powerful, lightweight engine, inclusive of helicopters and auxiliary power units. A turboshaft engine is a form of gas turbine that is optimized to produce shaftpower rather than jet thrust. In concept, turboshaft engines are very similar to turbojets, with additional turbine expansion to extract heat energy from the exhaust and convert it into output shaft power. They are even more similar to turboprops, with only minor differences, and a single engine is often sold in both forms. Turboshaft engines are commonly used in applications that require a sustained high power output, high reliability, small size, and lightweight. These include helicopters, auxiliary power units, boats and ships, tanks, hovercraft, and stationary equipment.

allotrope vs isomer

Allotropes are different forms of the same element. They contain only one type of atom. For example, both diamond and graphite consist of the same element, carbon. Isomers are different compounds having the same chemical composition (molecular formula), but they are always composed of two or more elements.

Axial compressor

An axial compressor is a gas compressor that can continuously pressurize gases. It is a rotating, airfoil-based compressor in which the gas or working fluid principally flows parallel to the axis of rotation, or axially. This differs from other rotating compressors such as centrifugal, axi-centrifugal and mixed-flow compressors, where the fluid flow includes a "radial component" through the compressor. The energy level of the fluid increases as it flows through the compressor due to the action of the rotor blades, which exert a torque on the fluid. The stationary blades slow the fluid, converting the circumferential component of flow into pressure. Compressors are typically driven by an electric motor or a steam or gas turbine. Axial flow compressors produce a continuous flow of compressed gas, and have the benefits of high efficiency and large mass flow rate, particularly in relation to their size and cross-section. They do, however, require several rows of airfoils to achieve a large pressure rise, making them complex and expensive relative to other designs (e.g. centrifugal compressors). Axial compressors are integral to the design of large gas turbines such as jet engines, high speed ship engines, and small scale power stations. They are also used in industrial applications such as large volume air separation plants, blast furnace air, fluid catalytic cracking air, and propane dehydrogenation. Due to their high performance, high reliability and flexible operation during the flight envelope, they are also used in aerospace engines.
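
Because each row (stage) multiplies the pressure, the overall pressure ratio grows geometrically with stage count, which is why several rows of airfoils are needed. A sketch with an assumed per-stage ratio (both numbers are illustrative, not from the entry):

n_stages, per_stage = 10, 1.3        # assumed typical values
print(per_stage ** n_stages)         # ~13.8 overall pressure ratio from 10 stages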

exciton

An exciton is a bound state of an electron and an electron hole which are attracted to each other by the electrostatic Coulomb force. It is an electrically neutral quasiparticle that exists in insulators, semiconductors and some liquids. The exciton is regarded as an elementary excitation of condensed matter that can transport energy without transporting net electric charge. An exciton can form when a material absorbs a photon of higher energy than its bandgap.[4] This excites an electron from the valence band into the conduction band. In turn, this leaves behind a positively charged electron hole (an abstraction for the location from which an electron was moved). The electron in the conduction band is then effectively attracted to this localized hole by the repulsive Coulomb forces from large numbers of electrons surrounding the hole and excited electron. This attraction provides a stabilizing energy balance. Consequently, the exciton has slightly less energy than the unbound electron and hole. The wavefunction of the bound state is said to be hydrogenic, an exotic atom state akin to that of a hydrogen atom. However, the binding energy is much smaller and the particle's size much larger than a hydrogen atom. This is because of both the screening of the Coulomb force by other electrons in the semiconductor (i.e., its relative permittivity), and the small effective masses of the excited electron and hole. The recombination of the electron and hole, i.e., the decay of the exciton, is limited by resonance stabilization due to the overlap of the electron and hole wave functions, resulting in an extended lifetime for the exciton.
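
The hydrogenic picture makes the "smaller binding energy, larger size" claim easy to quantify: scale the hydrogen Rydberg by the reduced effective mass and divide by the permittivity squared. A sketch using commonly quoted GaAs values (assumed, not from the entry):

ry_eV, a0_nm = 13.606, 0.0529               # hydrogen Rydberg and Bohr radius
me_eff, mh_eff, eps_r = 0.067, 0.45, 12.9   # assumed GaAs effective masses and permittivity
mu = me_eff * mh_eff / (me_eff + mh_eff)    # reduced mass, in units of the electron mass
E_b = ry_eV * mu / eps_r**2                 # hydrogenic binding energy, eV
a_x = a0_nm * eps_r / mu                    # exciton "Bohr radius", nm
print(f"{E_b*1e3:.1f} meV, {a_x:.1f} nm")   # ~4.8 meV and ~12 nm, vs 13.6 eV and 0.05 nm for hydrogen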

innate immune system vs adaptive immunity

Innate immunity refers to nonspecific defense mechanisms that come into play immediately or within hours of an antigen's appearance in the body. ... The innate immune response is activated by chemical properties of the antigen. Adaptive immunity refers to the antigen-specific immune response.

Infrared heater

An infrared heater or heat lamp is a body with a higher temperature which transfers energy to a body with a lower temperature through electromagnetic radiation. Depending on the temperature of the emitting body, the wavelength of the peak of the infrared radiation ranges from 780 nm to 1 mm. No contact or medium between the two bodies is needed for the energy transfer. Infrared heaters can be operated in vacuum or atmosphere. The most common filament material used for electrical infrared heaters is tungsten wire, which is coiled to provide more surface area. Low temperature alternatives for tungsten are carbon, or alloys of iron, chromium, and aluminum (trademark and brand name Kanthal). While carbon filaments are more fickle to produce, they heat up much more quickly than a comparable medium-wave heater based on a FeCrAl filament. When light is undesirable or not necessary in a heater, ceramic infrared radiant heaters are the preferred choice. Containing 8 meters of coiled alloy resistance wire, they emit a uniform heat across the entire surface of the heater and the ceramic is 90% absorbent of the radiation. As absorption and emission are based on the same physical causes in each body, ceramic is ideally suited as a material for infrared heaters. Industrial infrared heaters sometimes use a gold coating on the quartz tube that reflects the infrared radiation and directs it towards the product to be heated. Consequently, the infrared radiation impinging on the product is virtually doubled. Gold is used because of its oxidation resistance and very high infrared reflectivity of approximately 95%.[4] Quartz tungsten infrared heaters emit medium- and short-wave energy, reaching operating temperatures of up to 1500 °C (medium wave) and 2600 °C (short wave). They reach operating temperature within seconds. Peak wavelength emissions are approximately 1.6 μm (medium wave infrared) and 1 μm (short wave infrared).
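
The quoted peak wavelengths follow directly from Wien's displacement law, peak wavelength = b/T; a quick check in Python:

b = 2.898e-3                                 # Wien's displacement constant, m*K
for name, t_c in (("medium-wave filament", 1500), ("short-wave filament", 2600)):
    t_k = t_c + 273.15
    print(f"{name}: peak at {b / t_k * 1e6:.2f} um")
# ~1.63 um and ~1.01 um, matching the 1.6 um and 1 um quoted above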

Macrocosm and microcosm

The big order of things reflected within the smaller order of things: find the big in the small and the small in the big. Macrocosm and microcosm refers to a vision of cosmos where the part (microcosm) reflects the whole (macrocosm) and vice versa. It is a feature present in many esoteric models of philosophy, both ancient and modern.[2] It is closely associated with Hermeticism and underlies practices such as astrology, alchemy and sacred geometry with its premise of "As Above, So Below".[3]

Doubly special relativity

Doubly special relativity[1][2][3] (DSR) - also called deformed special relativity or, by some, extra-special relativity - is a modified theory of special relativity in which there is not only an observer-independent maximum velocity (the speed of light), but an observer-independent maximum energy scale and minimum length scale (the Planck energy and Planck length). Experiments to date have not observed contradictions to special relativity (see Modern searches for Lorentz violation).

Gravitoelectromagnetism

Gravitoelectromagnetism, abbreviated GEM, refers to a set of formal analogies between the equations for electromagnetism and relativistic gravitation; specifically: between Maxwell's field equations and an approximation, valid under certain conditions, to the Einstein field equations for general relativity. Gravitomagnetism is a widely used term referring specifically to the kinetic effects of gravity, in analogy to the magnetic effects of moving electric charge. The most common version of GEM is valid only far from isolated sources, and for slowly moving test particles.
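
For reference, here are the GEM field equations in one common convention (a sketch from memory; authors differ on the factors attached to the gravitomagnetic field B_g, so treat the exact coefficients as convention-dependent):

\nabla \cdot \mathbf{E}_g = -4\pi G \rho_g, \qquad \nabla \cdot \mathbf{B}_g = 0,
\nabla \times \mathbf{E}_g = -\frac{\partial \mathbf{B}_g}{\partial t}, \qquad
\nabla \times \mathbf{B}_g = \frac{1}{c^2}\left(-4\pi G\,\mathbf{J}_g + \frac{\partial \mathbf{E}_g}{\partial t}\right)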

Evanescent field

In electromagnetics, an evanescent field, or evanescent wave, is an oscillating electric and/or magnetic field that does not propagate as an electromagnetic wave but whose energy is spatially concentrated in the vicinity of the source (oscillating charges and currents). Even when there is a propagating electromagnetic wave produced (e.g., by a transmitting antenna), one can still identify as an evanescent field the component of the electric or magnetic field that cannot be attributed to the propagating wave observed at a distance of many wavelengths (such as the far field of a transmitting antenna). A hallmark of an evanescent field is that there is no net energy flow in that region. Since the net flow of electromagnetic energy is given by the average Poynting vector, this means that the Poynting vector in these regions, as averaged over a complete oscillation cycle, is zero.[note 1] In many cases one cannot simply say that a field is or is not evanescent. For instance, for a wave guided along an interface, energy is indeed transmitted in the horizontal direction. The field strength drops off exponentially away from the surface, leaving it concentrated in a region very close to the interface, for which reason this is referred to as a surface wave.[1] However, there is no propagation of energy away from (or toward) the surface (in the z direction), so that one could properly describe the field as being "evanescent in the z direction". This is one illustration of the inexactness of the term. In most cases where they exist, evanescent fields are simply thought of and referred to as electric or magnetic fields, without the evanescent property (zero average Poynting vector in one or all directions) ever being pointed out. The term is especially applied to differentiate a field or solution from cases where one normally expects a propagating wave.
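
The exponential drop-off can be made concrete for total internal reflection, where the field decays like e^(-kz) away from the interface. A sketch with assumed glass-to-air values:

import numpy as np

lam0, n1, n2 = 500e-9, 1.5, 1.0      # assumed vacuum wavelength; glass, air
theta = np.radians(60)               # beyond the ~41.8 deg critical angle
kappa = (2 * np.pi / lam0) * np.sqrt((n1 * np.sin(theta))**2 - n2**2)
print(f"penetration depth 1/kappa = {1/kappa*1e9:.0f} nm")   # ~96 nm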

quadrupole waves

In general relativity, the quadrupole formula describes the rate at which gravitational waves are emitted from a system of masses based on the change of the (mass) quadrupole moment.
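
Applied to a circular binary, the quadrupole formula yields the standard radiated power P = (32/5) G^4 m1^2 m2^2 (m1+m2) / (c^5 r^5); a sketch for the Earth-Sun system, which is a famously feeble source:

G, c = 6.674e-11, 2.998e8
m1, m2, r = 1.989e30, 5.972e24, 1.496e11        # Sun, Earth, 1 AU
P = 32/5 * G**4 * m1**2 * m2**2 * (m1 + m2) / (c**5 * r**5)
print(f"{P:.0f} W")                              # ~200 W of gravitational waves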

Event symmetry

In physics, event symmetry includes invariance principles that have been used in some discrete approaches to quantum gravity where the diffeomorphism invariance of general relativity can be extended to a covariance under every permutation of spacetime events. It is not immediately obvious how event symmetry could work. It seems to say that taking one part of space time and swapping it with another part a long distance away is a valid physical operation, and that the laws of physics must be written to support this. Clearly this symmetry can only be correct if it is hidden or broken. To get this in perspective consider what the symmetry of general relativity seems to say. A smooth coordinate transformation or diffeomorphism can stretch and twist spacetime in any way so long as it is not torn. The laws of general relativity are unchanged in form under such a transformation. Yet this does not mean that objects can be stretched or bent without being opposed by a physical force. Likewise, event symmetry does not mean that objects can be torn apart in the way the permutations of spacetime would make us believe. In the case of general relativity the gravitational force acts as a background field that controls the measurement properties of spacetime. In ordinary circumstances the geometry of space is flat and Euclidean and the diffeomorphism invariance of general relativity is hidden thanks to this background field. Only in the extreme proximity of a violent collision of black holes would the flexibility of spacetime become apparent. In a similar way, event symmetry could be hidden by a background field that determines not just the geometry of spacetime, but also its topology. General relativity is often explained in terms of curved spacetime. We can picture the universe as the curved surface of a membrane like a soap film that changes dynamically in time. The same picture can help us understand how event symmetry would be broken. A soap bubble is made from molecules that interact via forces that depend on the orientations of the molecules and the distance between them. If you wrote down the equations of motion for all the molecules in terms of their positions, velocities and orientations, then those equations would be unchanged in form under any permutation of the molecules (which we will assume are all the same). This is mathematically analogous to the event symmetry of spacetime events. The equations may be different, and unlike the molecules on the surface of a bubble, the events of spacetime are not embedded in a higher-dimensional space, yet the mathematical principle is the same.

Swampland (physics)

In physics, the term swampland refers to effective low-energy physical theories which are not compatible with string theory. Physical theories which are compatible are called the "landscape". Recent developments in string theory suggest that the string theory landscape of false vacua is vast. It is natural to ask if the landscape is as vast as allowed by consistent-looking effective field theories. Some authors (e.g. Cumrun Vafa[1]) suggest that this is not the case and that the landscape is surrounded by an even larger swampland of consistent-looking semiclassical effective field theories, which are actually inconsistent. If there is a charge symmetry, that symmetry has to be a gauge symmetry, not a global one, and in the spectrum of charged particles, there has to be at least one particle with a mass in Planck units less than the gauge coupling strength. However, not all charged particles are necessarily light. That applies to magnetic monopoles as well. The sign of some higher order terms in the effective action is constrained by the absence of superluminal propagation. It has been shown that the swampland criteria are inconsistent with the idea of single-field slow-roll inflation given current cosmological data.[3]

Black hole electron

In physics, there is a speculative hypothesis that if there were a black hole with the same mass, charge and angular momentum as an electron, it would share other properties of the electron. Most notably, Brandon Carter showed in 1968 that the magnetic moment of such an object would match that of an electron.[1] This is interesting because calculations ignoring special relativity and treating the electron as a small rotating sphere of charge give a magnetic moment that is off by roughly a factor of 2, the so-called gyromagnetic ratio. However, Carter's calculations also show that a would-be black hole with these parameters would be 'super-extremal'. Thus, unlike a true black hole, this object would display a naked singularity, meaning a singularity in spacetime not hidden behind an event horizon. It would also give rise to closed timelike curves. Standard quantum electrodynamics (QED), currently the most comprehensive theory of particles, treats the electron as a point particle.[2] There is no evidence that the electron is a black hole (or naked singularity). Furthermore, since the electron is quantum mechanical in nature,[3] any description purely in terms of general relativity is inadequate. Hence, the existence of a black hole electron remains strictly theoretical.
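
The "super-extremal" claim can be checked with back-of-envelope numbers: a Kerr horizon requires the spin length scale a = J/(Mc) to be no larger than the mass length scale GM/c^2, and for the electron the spin term wins by roughly 45 orders of magnitude. A sketch:

G, c, hbar, m_e = 6.674e-11, 2.998e8, 1.0546e-34, 9.109e-31
mass_scale = G * m_e / c**2          # gravitational length scale of the electron's mass
spin_scale = (hbar / 2) / (m_e * c)  # a = J/(Mc) with J = hbar/2
print(mass_scale, spin_scale)        # ~7e-58 m vs ~2e-13 m -> no horizon possible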

Bound state

In quantum physics, a bound state is a special quantum state of a particle subject to a potential such that the particle has a tendency to remain localised in one or more regions of space. The potential may be external or it may be the result of the presence of another particle; in the latter case, one can equivalently define a bound state as a state representing two or more particles whose interaction energy exceeds the total energy of each separate particle. One consequence is that, given a potential vanishing at infinity, negative-energy states must be bound. In general, the energy spectrum of the set of bound states is discrete, unlike free particles, which have a continuous spectrum. A proton and an electron can move separately; when they do, the total center-of-mass energy is positive, and such a pair of particles can be described as an ionized atom. Once the electron starts to "orbit" the proton, the energy becomes negative, and a bound state - namely the hydrogen atom - is formed. Only the lowest-energy bound state, the ground state, is stable. Other excited states are unstable and will decay into stable (but not other unstable) bound states with less energy by emitting a photon. A nucleus is a bound state of protons and neutrons (nucleons).
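
For the hydrogen example, the bound-state energies are discrete and negative, E_n = -13.6 eV / n^2, and a decay between two of them emits a photon carrying the energy difference. A quick sketch:

for n in (1, 2, 3):
    print(f"E_{n} = {-13.6 / n**2:.2f} eV")          # discrete, negative spectrum
dE = -13.6/2**2 - (-13.6/1**2)                       # n=2 -> n=1 transition
print(f"photon: {dE:.1f} eV, {1239.84/dE:.1f} nm")   # 10.2 eV, ~121.6 nm (Lyman-alpha)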

Rushbrooke inequality

In statistical mechanics, the Rushbrooke inequality relates the critical exponents of a magnetic system which exhibits a continuous (second-order) phase transition in the thermodynamic limit for non-zero temperature T.
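
The inequality itself reads α + 2β + γ ≥ 2. A quick check with the exactly known 2D Ising exponents, where it holds with equality:

alpha, beta, gamma = 0.0, 1/8, 7/4   # exact 2D Ising critical exponents
print(alpha + 2*beta + gamma)        # 2.0 -> satisfies alpha + 2*beta + gamma >= 2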

Equivalence principle

In the theory of general relativity, the equivalence principle is the equivalence of gravitational and inertial mass, and Albert Einstein's observation that the gravitational "force" as experienced locally while standing on a massive body (such as the Earth) is the same as the pseudo-force experienced by an observer in a non-inertial (accelerated) frame of reference.

Laser diode

Light source that uses semiconductor diode technology and optics to produce laser light. A laser diode (LD), injection laser diode (ILD), or diode laser is a semiconductor device similar to a light-emitting diode in which a diode pumped directly with electrical current can create lasing conditions at the diode's junction.[1]:3 Laser diodes can directly convert electrical energy into light. Driven by voltage, the doped p-n junction allows for recombination of an electron with a hole. Due to the drop of the electron from a higher energy level to a lower one, radiation, in the form of an emitted photon, is generated. This is spontaneous emission. Stimulated emission can be produced when the process is continued and further generates light with the same phase, coherence and wavelength. The choice of the semiconductor material determines the wavelength of the emitted beam, which in today's laser diodes ranges from infrared to the UV spectrum. Laser diodes are the most common type of lasers produced, with a wide range of uses that include fiber optic communications, barcode readers, laser pointers, CD/DVD/Blu-ray disc reading/recording, laser printing, laser scanning and light beam illumination. With the use of a phosphor like that found on white LEDs, laser diodes can be used for general illumination. A laser diode is electrically a PIN diode. The active region of the laser diode is in the intrinsic (I) region, and the carriers (electrons and holes) are pumped into that region from the N and P regions respectively. While initial diode laser research was conducted on simple P-N diodes, all modern lasers use the double-hetero-structure implementation, where the carriers and the photons are confined in order to maximize their chances for recombination and light generation. Unlike a regular diode, the goal for a laser diode is to recombine all carriers in the I region and produce light. Thus, laser diodes are fabricated using direct band-gap semiconductors. The laser diode epitaxial structure is grown using one of the crystal growth techniques, usually starting from an N doped substrate, and growing the I doped active layer, followed by the P doped cladding, and a contact layer. The active layer most often consists of quantum wells, which provide lower threshold current and higher efficiency.[1]
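
The claim that the semiconductor material sets the wavelength follows from the bandgap: the emitted photon energy is roughly E_g, so wavelength ≈ 1240 nm·eV / E_g. A sketch with assumed textbook bandgaps:

for material, e_g in (("GaAs", 1.42), ("GaN", 3.4)):   # assumed bandgaps, eV
    print(f"{material}: ~{1239.84 / e_g:.0f} nm")
# GaAs -> ~873 nm (infrared), GaN -> ~365 nm (near-UV)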

Quantitative easing (QE)

Quantitative easing (QE) is an expansion of the open market operations of a country's central bank. In the United States, the Federal Reserve is the central bank. QE is used to stimulate an economy by making it easier for businesses to borrow money.

quantum locking

Quantum locking occurs when a superconductor becomes trapped within a magnetic field. When this happens the superconductor is locked in space and will not move without an outside force. This is possible due to the Meissner effect together with flux pinning: the field is expelled from the bulk of the superconductor, while the magnetic flux that does penetrate it through small defects is pinned in place, so the superconductor resists any motion that would change the field passing through it.

Hawking radiation

Radiation predicted to be released by black holes due to quantum effects near the event horizon. Hawking radiation reduces the mass and rotational energy of black holes and is therefore also known as black hole evaporation. Because of this, black holes that do not gain mass through other means are expected to shrink and ultimately vanish. As the radiation temperature is inversely proportional to the black hole's mass, micro black holes are predicted to be larger emitters of radiation than more massive black holes and should thus shrink and dissipate faster.[2]
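
The inverse mass-temperature relation is T = ħc³/(8πGMk_B); plugging in one solar mass shows why stellar black holes evaporate absurdly slowly:

import math
G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.0546e-34, 1.381e-23
M_sun = 1.989e30
T = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)
print(f"{T:.2e} K")    # ~6e-8 K, far colder than the cosmic microwave background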

spaceguard program

Spaceguard - a bargain insurance policy against killer asteroids. "Spaceguard" is an international effort to search the skies for large asteroids that might collide with Earth and devastate civilisation.

Surface tension

Surface tension is the tendency of liquid surfaces to shrink into the minimum surface area possible. Surface tension allows insects (e.g. water striders), usually denser than water, to float and slide on a water surface. At liquid-air interfaces, surface tension results from the greater attraction of liquid molecules to each other (due to cohesion) than to the molecules in the air (due to adhesion). The net effect is an inward force at its surface that causes the liquid to behave as if its surface were covered with a stretched elastic membrane. Thus, the surface comes under tension from the imbalanced forces, which is probably where the term "surface tension" came from.[1] Because of the relatively high attraction of water molecules to each other through a web of hydrogen bonds, water has a higher surface tension (72.8 millinewtons per meter at 20 °C) than most other liquids. Surface tension is an important factor in the phenomenon of capillarity. Surface tension has the dimension of force per unit length, or of energy per unit area. The two are equivalent, but when referring to energy per unit of area, it is common to use the term surface energy, which is a more general term in the sense that it applies also to solids. In materials science, surface tension is used for either surface stress or surface energy.
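
The capillarity mentioned above gives a nice worked number via Jurin's law, h = 2γcosθ/(ρgr); a sketch assuming water in a clean glass tube (contact angle ≈ 0) of made-up radius:

gamma, rho, g = 0.0728, 1000.0, 9.81   # water at 20 degC: N/m, kg/m^3, m/s^2
r = 0.5e-3                             # assumed tube radius, m
h = 2 * gamma / (rho * g * r)          # Jurin's law with cos(theta) ~ 1
print(f"{h*100:.1f} cm rise")          # ~3 cm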

Néel temperature

The Néel temperature or magnetic ordering temperature, TN, is the temperature above which an antiferromagnetic material becomes paramagnetic—that is, the thermal energy becomes large enough to destroy the microscopic magnetic ordering within the material.[1]

Eagleworks Laboratories

The Advanced Propulsion Physics Laboratory or "Eagleworks Laboratories" at NASA's Johnson Space Center is a small research group investigating a variety of theories regarding new forms of spacecraft propulsion. The principal investigator is Dr. Harold G. White. The group is developing the White-Juday warp-field interferometer in the hope of observing small disturbances of spacetime and also testing small prototypes of thrusters that do not use reaction mass, with currently inconclusive results.[citation needed] The proposed principle of operation of these quantum vacuum plasma thrusters, such as the RF resonant cavity thruster ('EM Drive'),[1][2] has been shown to be inconsistent with known laws of physics, including conservation of momentum and conservation of energy. No plausible theory of operation for such drives has been proposed.

what causes tides? why do lakes not have tides?

The gravitational pinching of the moon and sun. It's similar to popping a pimple. You squeeze a big pimple and more fluid is displaced (oceans). When you squeeze a smaller one, not much happens (lakes, organisms, etc.). Lakes don't have major tides because the pinching is not enough to create a large enough pressure gradient to make a noticeable difference. The ocean has tides because the pinching becomes very strong over a wider range of pressure being applied. Lakes actually do have tides, they are just microscopic. We as humans actually have tides, but again they are extremely small. As to why different areas experience variations in tide height, it's due to the topography of the region, i.e. the nooks and crannies cause the pressure that displaces the water to move more or less water. ___________ If the moon just pulled water towards it, then shouldn't all objects in the land (like cups and books) be pulled up as well? And why is there a tide on the opposite side of the earth as well? It's a bit more complicated than that. Basically, the moon is small compared to the earth so that a lot of area is exposed to the moon. That area is very big - so big, that there is a significant difference in how strongly the moon pulls the earth on opposite sides of the earth. So if I'm under the moon, the moon will pull me more strongly than someone on the opposite side of the earth. This is very simplified, but the moon's gravitational force acts upon the earth as many vectors, basically forces with directions. The vectors all point in different directions, because they're at different points on the earth, facing the moon. It's complicated here, but you can add vectors. If the moon is facing the North Pole, then at the equator, those vectors are actually pointed inwards at the center of the earth. Likewise, if the moon is facing the equator, the vectors at the poles face inwards towards the center of the earth. This would make it seem like where the vectors (forces) point inwards the water should go inwards too, right? But since the earth is in the way, the water moving "inwards" literally pushes other water away. The pushing of the water causes tides - the point the moon faces is where the pushing occurs, and it is low tide there. 90° away from the point the moon faces, it is high tide, because all the pushed water moves there. The earth is a sphere, so all the force vectors are symmetrical on opposite sides. That's why when high tide is at one place, the place on the opposite side of earth is also at high tide. There are more complex details to this, but this is the basic idea. The atmosphere also has tides, but again, highly insignificant. The negligible effect is mainly masked by temperature swings.
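
The "difference in how strongly the moon pulls on opposite sides" can be put into numbers: the differential (tidal) acceleration across the Earth is roughly 2GM·R/d³, which is tiny compared with surface gravity - consistent with lakes and people having only microscopic tides. A sketch:

G = 6.674e-11
M_moon, d = 7.35e22, 3.844e8         # lunar mass, kg; Earth-Moon distance, m
R_earth = 6.371e6                    # Earth radius, m
a_tidal = 2 * G * M_moon * R_earth / d**3
print(f"{a_tidal:.1e} m/s^2, {a_tidal/9.81:.1e} g")   # ~1.1e-6 m/s^2, about 1e-7 of surface gravity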

Why light cannot escape from black holes?

The gravitational pull is so strong that any light that entered the event horizon cannot escape: inside the horizon, spacetime is curved so strongly that every possible path leads further inward. Light climbing away from just outside the horizon is gravitationally red-shifted, its wavelength stretched ever longer, and the red shift grows without limit as the emission point approaches the horizon. Gravitational time dilation is also a thing. _______ Do they emit radio waves then? Or literally nothing can get past the escape velocity? Radio waves are electromagnetic radiation just like visible light, so they cannot escape from inside the horizon either; the only predicted emission is Hawking radiation, which arises from quantum effects near the horizon.

Holographic principle

The holographic principle is a tenet of string theories and a supposed property of quantum gravity that states that the description of a volume of space can be thought of as encoded on a lower-dimensional boundary to the region—such as a light-like boundary like a gravitational horizon. First proposed by Gerard 't Hooft, it was given a precise string-theory interpretation by Leonard Susskind[1] who combined his ideas with previous ones of 't Hooft and Charles Thorn.[1][2] As pointed out by Raphael Bousso,[3] Thorn observed in 1978 that string theory admits a lower-dimensional description in which gravity emerges from it in what would now be called a holographic way. The prime example of holography is the AdS/CFT correspondence. The holographic principle was inspired by black hole thermodynamics, which conjectures that the maximal entropy in any region scales with the radius squared, and not cubed as might be expected. In the case of a black hole, the insight was that the informational content of all the objects that have fallen into the hole might be entirely contained in surface fluctuations of the event horizon. The holographic principle resolves the black hole information paradox within the framework of string theory.[4] However, there exist classical solutions to the Einstein equations that allow values of the entropy larger than those allowed by an area law, hence in principle larger than those of a black hole. These are the so-called "Wheeler's bags of gold". The existence of such solutions conflicts with the holographic interpretation, and their effects in a quantum theory of gravity including the holographic principle are not fully understood yet.[5]
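
The area (not volume) scaling comes from the Bekenstein-Hawking entropy, S = k_B c³ A / (4Għ); a sketch for a solar-mass black hole:

import math
G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.0546e-34, 1.381e-23
M = 1.989e30                         # one solar mass
r_s = 2 * G * M / c**2               # Schwarzschild radius, ~3 km
A = 4 * math.pi * r_s**2             # horizon area, m^2
S = k_B * c**3 * A / (4 * G * hbar)  # entropy set by area alone
print(f"{S:.1e} J/K")                # ~1.4e54 J/K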

Percolation threshold

The percolation threshold is a mathematical concept in percolation theory that describes the formation of long-range connectivity in random systems. Below the threshold a giant connected component does not exist; while above it, there exists a giant component of the order of system size. In engineering and coffee making, percolation represents the flow of fluids through porous media, but in the mathematics and physics worlds it generally refers to simplified lattice models of random systems or networks (graphs), and the nature of the connectivity in them. The percolation threshold is the critical value of the occupation probability p, or more generally a critical surface for a group of parameters p1, p2, ..., such that infinite connectivity (percolation) first occurs.
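
The threshold is easy to see numerically. A minimal Monte Carlo sketch for site percolation on a square lattice (the known threshold is about p ≈ 0.593; the lattice size and trial counts here are arbitrary choices):

import numpy as np
from scipy.ndimage import label

def spans(p, n=100, rng=np.random.default_rng(0)):
    grid = rng.random((n, n)) < p              # occupy each site with probability p
    labels, _ = label(grid)                    # 4-connected clusters
    top = set(labels[0][labels[0] > 0])        # cluster ids touching the top row
    bottom = set(labels[-1][labels[-1] > 0])   # ... and the bottom row
    return bool(top & bottom)                  # a spanning cluster exists?

for p in (0.50, 0.55, 0.60, 0.65):
    hits = sum(spans(p) for _ in range(40))
    print(f"p = {p:.2f}: spans top-to-bottom in {hits}/40 trials")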

what 64-bit processor means?

They are 2 of the most commonly used CPU architectures (x86-64 is a 64 bit architecture that can still work with 32 bit words without needing entirely different instructions). In a nutshell it's the length in binary of the data unit operated on by a CPU. Since every program needs to tell the CPU what to do, the program needs to speak its language. That's where instruction sets come in - and they are specific to the CPU architecture in use. Therefore a 64 bit program could not run on a 32 bit CPU since it may be asking the CPU to perform a 64 bit operation that it can't do! Because of this, a 32 bit machine can only address 4,294,967,296 bytes of memory, resulting in a hard limit to the amount of memory that machine can use. This means a 32 bit machine shouldn't have more than 4 GB of RAM - any more is impossible to address. 64 bit processors can address 18,446,744,073,709,551,616 bytes (16 exbibytes), making the memory available to that CPU practically limitless (I'd be willing to bet pieces of my anatomy there aren't enough RAM sticks on earth to actually 'create' that much memory on a single machine).
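
The address-space arithmetic is easy to verify (Python integers are arbitrary precision, so this is exact):

print(2**32)              # 4,294,967,296 byte addresses -> 4 GiB
print(2**64)              # 18,446,744,073,709,551,616 byte addresses
print(2**64 // 2**60)     # 16 exbibytes (EiB)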

Warp-field experiments

Warp-field experiments are a series of current and proposed experiments to create and detect instances of spacetime warping. The ultimate goal is to prove or disprove the possibility of spacetime metric engineering with reasonable amounts of energy. Spacetime metric engineering is a requirement for physically recreating solutions of general relativity such as Einstein-Rosen bridges or the Alcubierre drive. Current experiments focus on the Alcubierre metric and its modifications. Alcubierre's work from 1994 implies that even if the required exotic matter with negative energy densities can be created, the total mass-energy demand for his proposed warp drive would exceed anything that could be realistically attained by human technology.

how do submarines ventilate air?

When a submerged submarine "ventilates" it sticks a snorkel mast (a big vertical hollow tube) out of the water, sucks fresh air inside, and blows old air out into the water through another hollow tube. ... If the oxygen generator is broken, stored compressed oxygen can be released into the air.

Kaye effect

The Kaye effect was first observed while pouring a viscous organic liquid onto a surface: the surface suddenly spouted an upgoing jet of liquid which merged with the downgoing one. This phenomenon has since been discovered to be common in all shear-thinning liquids (liquids which thin under shear stress). Common household liquids with this property are liquid hand soaps, shampoos and non-drip paint. The effect usually goes unnoticed, however, because it seldom lasts more than about 300 milliseconds. The effect can be sustained by pouring the liquid onto a slanted surface, preventing the outgoing jet from intersecting the downward one (which tends to end the effect). It is thought to occur when the downgoing stream "slips" off the pile it is forming, and due to a thin layer of shear-thinned liquid acting as a lubricant, does not combine with the pile. When the slipping stream reaches a dimple in the pile, it will shoot off it like a ramp, creating the effect.

Widom scaling

Widom scaling (after Benjamin Widom) is a hypothesis in statistical mechanics regarding the free energy of a magnetic system near its critical point which leads to the critical exponents becoming no longer independent so that they can be parameterized in terms of two values. The hypothesis can be seen to arise as a natural consequence of the block-spin renormalization procedure, when the block size is chosen to be of the same size as the correlation length.[1]
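
One consequence of the hypothesis is that the exponents obey fixed relations such as γ = β(δ − 1); a check with the exact 2D Ising values:

beta, delta, gamma = 1/8, 15, 7/4    # exact 2D Ising critical exponents
print(beta * (delta - 1) == gamma)   # True: the Widom relation gamma = beta*(delta - 1)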

Heterodyne

Mixing two frequencies to create new ones; this allowed voice signals to be superimposed on radio waves. Heterodyning is a signal processing technique invented by Canadian inventor-engineer Reginald Fessenden that creates new frequencies by combining or mixing two frequencies.[1][2][3] Heterodyning is used to shift one frequency range into another, new one, and is also involved in the processes of modulation and demodulation.[2][4] The two frequencies are combined in a nonlinear signal-processing device such as a vacuum tube, transistor, or diode, usually called a mixer.[2] In the most common application, two signals at frequencies f1 and f2 are mixed, creating two new signals, one at the sum f1 + f2 of the two frequencies, and the other at the difference f1 − f2.[3] These frequencies are called heterodynes. Typically only one of the new frequencies is desired, and the other signal is filtered out of the output of the mixer. Heterodyne frequencies are related to the phenomenon of "beats" in acoustics. A major application of the heterodyne process is in the superheterodyne radio receiver circuit, which is used in virtually all modern radio receivers.
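
The sum and difference frequencies fall straight out of multiplying two sinusoids (sin a · sin b = ½cos(a−b) − ½cos(a+b)). A numeric sketch with assumed input frequencies:

import numpy as np

fs, f1, f2 = 10_000, 1_000, 1_300     # assumed sample rate and input tones, Hz
t = np.arange(0, 1, 1/fs)
mixed = np.sin(2*np.pi*f1*t) * np.sin(2*np.pi*f2*t)   # an ideal multiplying mixer
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(t.size, 1/fs)
print(freqs[spectrum > spectrum.max()/2])  # [ 300. 2300.] -> f2-f1 and f1+f2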

DVD

digital versatile disc. The MPEG encoder that creates the compressed movie file analyzes each frame and decides how to encode it. The compression uses some of the same technology as still image compression does to eliminate redundant or irrelevant data. It also uses information from other frames to reduce the overall size of the file. Each frame can be encoded in one of three ways:
As an intraframe - An intraframe contains the complete image data for that frame. This method of encoding provides the least compression.
As a predicted frame - A predicted frame contains just enough information to tell the DVD player how to display the frame based on the most recently displayed intraframe or predicted frame. This means that the frame contains only the data that relates to how the picture has changed from the previous frame.
As a bidirectional frame - In order to display this type of frame, the player must have the information from the surrounding intraframe or predicted frames. Using data from the closest surrounding frames, it uses interpolation (something like averaging) to calculate the position and color of each pixel.
DVD audio and DVD video are different formats. DVD audio discs and players are relatively rare right now, but they will become more common, and the difference in sound quality should be noticeable. In order to take advantage of higher-quality DVD audio discs, you will need a DVD player with a 192kHz/24-bit digital-to-analog converter (DAC). Most DVD players have only a 96kHz/24-bit digital-to-analog converter. So if you want to be able to listen to DVD audio discs, be sure to look for a DVD audio player with a 192kHz/24-bit digital-to-analog converter. DVD audio recordings can provide far better sound quality than CDs: CD audio is sampled at 44.1kHz with 16-bit accuracy, while DVD audio can reach a 192kHz sampling rate with 24-bit accuracy. CDs can hold 74 minutes of music. DVD audio discs can hold 74 minutes of music at their highest quality level, 192kHz/24-bit audio. By lowering either the sampling rate or the accuracy, DVDs can be made to hold more music. A DVD audio disc can store up to two hours of 6-channel, better than CD quality, 96kHz/24-bit music. Lower the specifications further, and a DVD audio disc can hold almost seven hours of CD-quality audio.

coincidentally

happening at the same time in a way that results from chance despite being very unlikely.

unpleasant

not enjoyable Something unpleasant is disagreeable, painful, or annoying in some way. No one likes unpleasant experiences.

Vapor pressure

the pressure exerted by a vapor over a liquid. Vapor pressure (or vapour pressure in British English) or equilibrium vapor pressure is defined as the pressure exerted by a vapor in thermodynamic equilibrium with its condensed phases (solid or liquid) at a given temperature in a closed system. The equilibrium vapor pressure is an indication of a liquid's evaporation rate. It relates to the tendency of particles to escape from the liquid (or a solid). A substance with a high vapor pressure at normal temperatures is often referred to as volatile. As the temperature of a liquid increases, the kinetic energy of its molecules also increases; as more molecules gain enough energy to escape into the vapor, the vapor pressure rises.
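
The steep rise of vapor pressure with temperature is often captured with the empirical Antoine equation, log10 P = A − B/(C + T); a sketch with commonly quoted constants for water (assumed values, valid roughly 1-100 °C):

def p_water_mmHg(t_c):
    A, B, C = 8.07131, 1730.63, 233.426   # assumed Antoine constants for water
    return 10 ** (A - B / (C + t_c))

print(f"{p_water_mmHg(25):.1f} mmHg")     # ~23.7 mmHg at room temperature
print(f"{p_water_mmHg(100):.0f} mmHg")    # ~760 mmHg -> boiling at 1 atm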

How do dolphins sleep. If dolphins need air to breathe then how do they sleep underwater?

"While sleeping, the bottlenose dolphin shuts down only half of its brain, along with the opposite eye. The other half of the brain stays awake at a low level of alertness. This attentive side is used to watch for predators, obstacles and other animals. It also signals when to rise to the surface for a fresh breath of air. After approximately two hours, the animal will reverse this process, resting the active side of the brain and awaking the rested half."

Conspiracy theories are usually just an argument from ignorance. I don't understand how people could have built the pyramids with their level of technology so [insert dumb idea to explain it] god did it, aliens did it, or they must've had some other type of outside help we are unaware of.

/

Does a burnt piece of toast have the same number of calories as a regular piece of toast?

The easy answer is no. If you mean combustion (or burning) of the bread, then there would be fewer calories because once combustion occurs (even partially) the byproducts are either indigestible or barely so. If you mean dark toast, the kind you might get at 6 on the toaster, it has the same calories. The Maillard reaction is what drives browning and it is a complex process where proteins denature and bind to other proteins as well as carbohydrates and so forth, creating an amalgam of mixed molecules. Essentially this is what leads to that caramel/nuttiness you get when things are browned. However, this conformational change and denaturation does not decrease the calories because the overall building blocks are the same and still digestible. However, if let's say a byproduct of a Maillard reaction is an indigestible molecule that was previously digestible, you could argue that it is now lower in caloric value because it is no longer bioavailable energy. Side note, a lot of people are talking about measuring calories by using a bomb calorimeter aka burning the item. This is no longer the method used for finding the caloric value of food. Instead they find the net average of Atwater bioavailable nutrients and then use standardized values (e.g. 4 kcal/g for carbohydrates) to calculate the assumed caloric value. Again, this is obviously dependent on bioavailable sources of energy, not overall stored energy. A perfect example of how a bomb calorimeter is not a feasible option is lettuce. Excluding the water (which is 95% of the material) lettuce is primarily fiber. Insoluble fiber in this case, or in other words fiber we cannot break down (cellulose). This material has no caloric value to us because it is not bioavailable (aside from small amounts created by gut fermentation thanks to helpful bacteria). So a piece of lettuce has a net caloric value of basically 0 in the Atwater system. In a bomb calorimeter, however, it might have a much higher value because inside each of those cellulose-walled cells is stored sugars, proteins, and so forth. Additionally, cellulose is essentially a starch made up of beta-glucose; however, beta-glucose is in a different conformation than the alpha-glucose in starches we digest, which means it is incompatible with our enzymes. However, combustion-wise, cellulose and amylose (alpha-glucose polysaccharide, aka starch to most people) are equivalent in "calories" in the context of a bomb calorimeter. Again, this is not the case in bioavailability. The only animals that can actually get the full caloric potential from plant material are foregut fermenters and hindgut fermenters, aka cows and horses. This is why they need multiple stomachs or a large cecum, in order to host helpful microorganisms to break down cellulose. Even termites are not able to digest cellulose, but usually carry symbiotic organisms that can.
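
The Atwater bookkeeping mentioned above is just a weighted sum of the bioavailable macronutrients; a sketch with made-up but typical numbers for one slice of white bread:

carbs_g, protein_g, fat_g = 13.0, 2.7, 1.1    # assumed macros per slice, grams
kcal = 4*carbs_g + 4*protein_g + 9*fat_g      # Atwater factors: 4/4/9 kcal per gram
print(f"~{kcal:.0f} kcal")                    # ~73 kcal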

/

Mass is a property of matter. Think of it as a measure of the amount of energy something has (mass and energy are related by E = mc²).
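
That mass-energy reading is quantified by E = mc²; even one kilogram corresponds to an enormous amount of energy:

c = 2.998e8                           # speed of light, m/s
print(f"{1.0 * c**2:.2e} J per kg")   # ~9e16 J, roughly 21 megatons of TNT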

/

Faint young Sun paradox

4 billion years ago, liquid water flowed on Earth even though the faint young Sun produced only 70% of today's energy output. The faint young Sun paradox or faint young Sun problem describes the apparent contradiction between observations of liquid water early in Earth's history and the astrophysical expectation that the Sun's output would be only 70 percent as intense during that epoch as it is during the modern epoch.[1] The issue was raised by astronomers Carl Sagan and George Mullen in 1972.[2] Proposed resolutions of this paradox have taken into account greenhouse effects, changes to planetary albedo, astrophysical influences, or combinations of these suggestions. Early in Earth's history, the Sun's output would have been only 70 percent as intense as it is during the modern epoch, owing to a higher ratio of hydrogen to helium in its core. Since then the Sun has gradually brightened and consequently warmed the Earth's surface, a process known as radiative forcing. During the Archaean age, assuming constant albedo and other surface features such as greenhouse gases, Earth's equilibrium temperature would have been too low to sustain a liquid ocean. Astronomers Carl Sagan and George Mullen pointed out in 1972 that this is contrary to the geological and paleontological evidence.[2]
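
The paradox shows up in a one-line equilibrium-temperature estimate, T = [S(1−A)/4σ]^(1/4); the solar constant and albedo used here are today's values, assumed constant for the sketch:

S_now, albedo, sigma = 1361.0, 0.30, 5.67e-8   # W/m^2, dimensionless, W/m^2K^4
t_eq = lambda S: (S * (1 - albedo) / (4 * sigma)) ** 0.25
print(f"today: {t_eq(S_now):.0f} K, young Sun: {t_eq(0.7*S_now):.0f} K")
# ~255 K vs ~233 K: with a 30% dimmer Sun, the bare equilibrium temperature is even further below freezing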

Supervolcano

A supervolcano is a large volcano that has had an eruption with a Volcanic Explosivity Index (VEI) of 8, the largest recorded value on the index. This means the volume of deposits for that eruption is greater than 1,000 cubic kilometers (240 cubic miles). Supervolcanoes occur when magma in the mantle rises into the crust but is unable to break through it and pressure builds in a large and growing magma pool until the crust is unable to contain the pressure. This can occur at hotspots (for example, Yellowstone Caldera) or at subduction zones (for example, Toba). Large-volume supervolcanic eruptions are also often associated with large igneous provinces, which can cover huge areas with lava and volcanic ash. These can cause long-lasting climate change (such as the triggering of a small ice age) and threaten species with extinction. The Oruanui eruption of New Zealand's Taupo Volcano (about 26,500 years ago)[2] was the world's most recent VEI-8 eruption.

Golgi apparatus

A system of membranes that modifies and packages proteins for export by the cell. The Golgi apparatus is a major collection and dispatch station of protein products received from the endoplasmic reticulum (ER). Proteins synthesized in the ER are packaged into vesicles, which then fuse with the Golgi apparatus. These cargo proteins are modified and destined for secretion via exocytosis or for use in the cell. In this respect, the Golgi can be thought of as similar to a post office: it packages and labels items which it then sends to different parts of the cell or to the extracellular space. The Golgi apparatus is also involved in lipid transport and lysosome formation. The structure and function of the Golgi apparatus are intimately linked. Individual stacks have different assortments of enzymes, allowing for progressive processing of cargo proteins as they travel from the cis to the trans Golgi face.[5][10] Enzymatic reactions within the Golgi stacks occur exclusively near its membrane surfaces, where enzymes are anchored. This feature is in contrast to the ER, which has soluble proteins and enzymes in its lumen. Much of the enzymatic processing is post-translational modification of proteins. For example, phosphorylation of oligosaccharides on lysosomal proteins occurs in the early CGN.[5] Cis cisternae are associated with the removal of mannose residues.[5][10] Removal of mannose residues and addition of N-acetylglucosamine occur in medial cisternae.[5] Addition of galactose and sialic acid occurs in the trans cisternae.[5] Sulfation of tyrosines and carbohydrates occurs within the TGN.[5] Other general post-translational modifications of proteins include the addition of carbohydrates (glycosylation)[12] and phosphates (phosphorylation). Protein modifications may form a signal sequence that determines the final destination of the protein. For example, the Golgi apparatus adds a mannose-6-phosphate label to proteins destined for lysosomes. Another important function of the Golgi apparatus is in the formation of proteoglycans. Enzymes in the Golgi append proteins to glycosaminoglycans, thus creating proteoglycans.[13] Glycosaminoglycans are long unbranched polysaccharide molecules present in the extracellular matrix of animals.

For humans, sea water is not drinkable due to its high salt content. How do whales, manatees, seals, and other seafaring mammals stay hydrated?

A. They get water from their food, and avoid salty food.
B. They may have modifications to their kidneys that allow them to excrete more salt.
C. There's a lot we don't know; marine animals are hard to study.
Fish do absorb water through their skin and gills in a process called osmosis. ... The opposite is true for saltwater fish. As well as getting water through osmosis, saltwater fish need to purposefully drink water in order to get enough into their systems.

Jonas Salk

Developed the polio vaccine in 1952. While most scientists believed that effective vaccines could only be developed with live viruses, Salk developed a "killed-virus" vaccine by growing samples of the virus and then deactivating them by adding formaldehyde so that they could no longer reproduce. By injecting the benign strains into the bloodstream, the vaccine tricked the immune system into manufacturing protective antibodies without the need to introduce a weakened form of the virus into healthy patients. Many researchers, such as Polish-born virologist Albert Sabin, who was developing an oral "live-virus" polio vaccine, called Salk's approach dangerous. Sabin even belittled Salk as "a mere kitchen chemist." The hard-charging O'Connor, however, had grown impatient at the time-consuming process of developing a live-virus vaccine and put the resources of the March of Dimes behind Salk.

Extraterrestrial life

Alien life, such as microorganisms, has been hypothesized to exist in the Solar System and throughout the universe. This hypothesis relies on the vast size and consistent physical laws of the observable universe. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking,[6] as well as notable personalities such as Winston Churchill,[7][8] it would be improbable for life not to exist somewhere other than Earth.[9][10] This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the Universe, and the mediocrity principle, which states that there is nothing special about life on Earth.[11] The chemistry of life may have begun shortly after the Big Bang, 13.8 billion years ago, during a habitable epoch when the universe was only 10-17 million years old. Life may have emerged independently at many places throughout the universe. Alternatively, life may have formed less frequently, then spread—by meteoroids, for example—between habitable planets in a process called panspermia.[14][15] In any case, complex organic molecules may have formed in the protoplanetary disk of dust grains surrounding the Sun before the formation of Earth.[16] According to these studies, this process may occur outside Earth on several planets and moons of the Solar System and on planets of other stars.[16]

H bridge

An H bridge is an electronic circuit that switches the polarity of a voltage applied to a load. These circuits are often used in robotics and other applications to allow DC motors to run forwards or backwards.
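
The polarity switching boils down to which diagonal pair of switches is closed. A sketch (the switch names S1-S4 are a hypothetical labelling, with S1/S2 on the supply side and S3/S4 on the ground side):

STATES = {
    "forward": {"S1", "S4"},   # close one diagonal: current flows one way through the motor
    "reverse": {"S2", "S3"},   # close the other diagonal: polarity reverses
    "brake":   {"S3", "S4"},   # short both motor terminals to ground
    "coast":   set(),          # everything open
}

def closed_switches(mode):
    s = STATES[mode]
    # never close both switches on one leg (S1+S3 or S2+S4): that shorts the supply
    assert not ({"S1", "S3"} <= s or {"S2", "S4"} <= s), "shoot-through!"
    return s

print(closed_switches("forward"))   # {'S1', 'S4'}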

extended periodic table

An extended periodic table theorises about chemical elements beyond those currently known in the periodic table and proven up through oganesson, which completes the seventh period (row) in the periodic table at atomic number (Z) 118. The number of physically possible elements is unknown. A low estimate is that the periodic table may end soon after the island of stability,[15] which is expected to center on Z = 126, as the extension of the periodic and nuclides tables is restricted by the proton and the neutron drip lines and stability toward alpha decay and spontaneous fission.[77] One calculation by Y. Gambhir et al., analyzing nuclear binding energy and stability in various decay channels, suggests a limit to the existence of bound nuclei at Z = 146.[78] Some, such as Walter Greiner, predicted that there may not be an end to the periodic table.[16] Other predictions of an end to the periodic table include Z = 128 (John Emsley) and Z = 155 (Albert Khazan).[11] It is a "folk legend" among physicists that Richard Feynman suggested that neutral atoms could not exist for atomic numbers greater than Z = 137, on the grounds that the relativistic Dirac equation predicts that the ground-state energy of the innermost electron in such an atom would be an imaginary number. Here, the number 137 arises as the inverse of the fine-structure constant. By this argument, neutral atoms cannot exist beyond untriseptium (alternatively called "feynmanium"), and therefore a periodic table of elements based on electron orbitals breaks down at this point. However, this argument presumes that the atomic nucleus is pointlike. A more accurate calculation must take into account the small, but nonzero, size of the nucleus, which is predicted to push the limit further to Z ≈ 173.[16]
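
The Z = 137 folklore comes from the point-nucleus Dirac ground-state energy, E_1s = m_e c² √(1 − (Zα)²), which turns imaginary once Zα > 1. A quick numeric check:

import cmath
alpha = 1 / 137.036                        # fine-structure constant
for Z in (1, 137, 138):
    ratio = cmath.sqrt(1 - (Z * alpha)**2) # E_1s / (m_e c^2) for a point nucleus
    print(Z, ratio)
# Z = 138 gives a purely imaginary value: the point-nucleus solution breaks down there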

Comorbidity

In medicine, comorbidity is the presence of one or more additional conditions co-occurring with a primary condition; in the countable sense of the term, a comorbidity is each additional condition. The additional condition may also be a behavioral or mental disorder.

If a heart is a muscle, why doesn't it ever get tired of beating but things like my arms and legs do?

Couple of things first - cardiac muscle is fundamentally different compared to skeletal muscle. Although certain contractile proteins are similar, in terms of energetics (energy production and consumption), cardiomyocytes (heart cells) are very different. The main reason why cardiomyocytes are so resistant to fatigue is because they contain almost twice the amount of mitochondria. Mitochondria are the aerobic cellular powerhouse. We know this by looking at the content of citrate synthase, which tracks very well with mitochondrial content. The heart is very metabolically flexible in terms of fuel. It consumes glucose, free fatty acid and lactate. Yes.. you read that right. It consumes lactate (so does skeletal muscle). This is especially pronounced at high exercise intensities. Finally, cardiomyocytes are very well vascularized and since they have more mitochondria are incredibly good at extracting oxygen and using it for aerobic respiration. In fact, even at rest heart muscle pretty much extracts most usable oxygen from blood, which means the only way for the heart to improve oxygen delivery is to improve flow (as it cannot improve on extraction). These are just some broad concepts but I recommend taking a look at some exercise physiology texts that will help. One common question is - what would happen if we replaced all muscle in the body with cardiac muscle? Lots of bad things. Cardiac cells talk to each other through a structure called the intercalated disk which allows all cardiomyocytes to beat synchronously to produce an effective beat. Further, cardiomyocytes are self-excitatory i.e. they contract even without nerve supply (that's why a transplanted heart with its nerves cut still beats - at a lot faster rate too, because the heart needs the vagus nerve to rein it in). Obviously both of these would be very bad for skeletal muscle as these are incompatible with voluntary, purposeful movements. First, it's precisely because muscle fibers in a motor unit are isolated from others that we're capable of effective movements, otherwise a contraction that started in one fiber would rapidly spread to others. Secondly, you can only imagine how bad it would be if skeletal muscle cells started to contract by themselves without any nervous system input. Ninja edit - oh and one more thing, the metabolic rate of cardiac tissue per unit mass is almost 35 times greater than skeletal muscle (440 kcal/kg per day vs 13 kcal/kg per day). If we replaced skeletal muscle with cardiac muscle our daily energetic needs would skyrocket given that skeletal muscle is a substantial percentage of our body mass/weight. does the heart only have a set number of beats and if I speed my heart up with exercise am I draining it? Not true and please don't stop exercising. If anything, exercise revs up your heart (esp cardio; weight training has very modest cardiac effects) during exercise and that's a good thing (if your heart rate doesn't go up that's bad and called chronotropic incompetence). That's because over time you get what's called eccentric hypertrophy, so your heart can pump more blood out per beat (increased stroke volume). Further, regular cardio also increases vagal tone (the vagus is the nerve that slows the heart rate) and this in combination with the increased stroke volume means you get a very nice resting heart rate. Low resting heart rates and high cardiorespiratory fitness (VO2max or VO2peak; https://mayocl.in/2WzpBok) are associated with significantly lower risks of heart disease and all-cause mortality.

Black mamba

fastest known snake Black mambas live in the savannas and rocky hills of southern and eastern Africa. They are Africa's longest venomous snake, reaching up to 14 feet in length, although 8.2 feet is more the average. They are also among the fastest snakes in the world, slithering at speeds of up to 12.5 miles per hour.

Dietary fiber

Dietary fiber (British spelling fibre) or roughage is the portion of plant-derived food that cannot be completely broken down by human digestive enzymes. It has two main components: Soluble fiber - which dissolves in water - is readily fermented in the colon into gases and physiologically active by-products, such as short-chain fatty acids produced in the colon by gut bacteria;[1] it is viscous, may be called prebiotic fiber, and delays gastric emptying which, in humans, can result in an extended feeling of fullness. Insoluble fiber - which does not dissolve in water - is inert to digestive enzymes in the upper gastrointestinal tract and provides bulking. Some forms of insoluble fiber, such as resistant starches, can be fermented in the colon. Bulking fibers absorb water as they move through the digestive system, easing defecation. Dietary fiber consists of non-starch polysaccharides and other plant components such as cellulose, resistant starch, resistant dextrins, inulin, lignins, chitins, pectins, beta-glucans, and oligosaccharides. Cellulose, the main constituent of plant cell walls, is an insoluble dietary fiber.

DHA

Docosahexaenoic acid is an omega-3 fatty acid that is a primary structural component of the human brain, cerebral cortex, skin, and retina. In physiological literature, it is given the name 22:6(n−3). It can be synthesized from alpha-linolenic acid or obtained directly from maternal milk, fish oil, or algae oil. DHA is commonly used for heart disease and high cholesterol. It is also used for boosting memory and thinking skills, for aiding infant and child development, for certain eye disorders, and many other conditions, but there is no good scientific evidence to support these uses.

LDL and HDL cholesterol

HDL helps rid your body of excess cholesterol so it's less likely to end up in your arteries. LDL is called "bad cholesterol" because it takes cholesterol to your arteries, where it may collect in artery walls. Too much cholesterol in your arteries may lead to a buildup of plaque known as atherosclerosis.

Peccei-Quinn theory

In particle physics, the Peccei-Quinn theory is a well-known proposal for the resolution of the strong CP problem. Peccei-Quinn theory predicts that the small value of the θ parameter is explained by a dynamic field, rather than a constant value. Because particles arise within quantum fields, Peccei-Quinn theory predicts the existence of a new particle, the axion. The potential which this field carries causes it to relax to a value that naturally cancels the θ term, making the θ parameter effectively zero. Peccei-Quinn theory promotes θ to a dynamical quantity by introducing a global U(1) symmetry under which a complex scalar field is charged. This symmetry is spontaneously broken by the vacuum expectation value obtained by this scalar field, and the axion is the (nearly) massless Goldstone boson of this broken symmetry.

Hypolith

In Arctic and Antarctic ecology, a hypolith is a photosynthetic organism, and an extremophile, that lives underneath rocks in climatically extreme deserts such as Cornwallis Island and Devon Island in the Canadian high Arctic. The community itself is the hypolithon.

NC (complexity)

In complexity theory, the class NC (for "Nick's Class") is the set of decision problems decidable in polylogarithmic time on a parallel computer with a polynomial number of processors. In other words, a problem is in NC if there exist constants c and k such that it can be solved in time O(log^c n) using O(n^k) parallel processors. Stephen Cook[1][2] coined the name "Nick's class" after Nick Pippenger, who had done extensive research[3] on circuits with polylogarithmic depth and polynomial size.[4]
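As a toy illustration of polylogarithmic depth (not a statement about any particular NC problem), here is a minimal Python sketch that sums n numbers in O(log n) rounds; all the pairwise additions inside one round are independent, so with O(n) processors each round could run in constant time.

# Sum n values in O(log n) "parallel rounds": each round combines
# adjacent pairs; the pair-sums within a round are independent of
# each other, which is what a parallel machine would exploit.
def parallel_sum(values):
    values = list(values)
    rounds = 0
    while len(values) > 1:
        values = [sum(values[i:i + 2]) for i in range(0, len(values), 2)]
        rounds += 1
    return values[0], rounds

total, depth = parallel_sum(range(16))
print(total, depth)  # 120, 4 rounds = log2(16)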

Nernst effect

In physics and chemistry, the Nernst effect is a thermoelectric phenomenon observed when a sample allowing electrical conduction is subjected to a magnetic field and a temperature gradient normal to each other. An electric field will be induced normal to both. The Nernst equation, by contrast, is an important equation in electrochemistry used to find cell voltage when conditions are not standard (i.e., other than 298 K temperature, 1 atm pressure and 1 M concentrations).
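Since the entry mentions the Nernst equation, here is a minimal sketch of it in Python: E = E° − (RT/nF)·ln(Q). The Daniell-cell numbers (E° = 1.10 V, n = 2) are a standard textbook example I'm supplying, not values from the entry above.

import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol
T = 298.15   # temperature, K

def nernst(E_standard, n, Q):
    # Cell potential under non-standard conditions: E = E° - (RT/nF) ln Q
    return E_standard - (R * T) / (n * F) * math.log(Q)

# Daniell cell (Zn/Cu): E° = 1.10 V, n = 2 electrons,
# [Zn2+] = 1.0 M, [Cu2+] = 0.01 M, so Q = 1.0 / 0.01 = 100.
print(round(nernst(1.10, 2, 100.0), 3))  # ≈ 1.041 V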

Morphogenetic field

In the developmental biology of the early twentieth century, a morphogenetic field is a group of cells able to respond to discrete, localized biochemical signals leading to the development of specific morphological structures or organs.[1][2] The spatial and temporal extents of the embryonic field are dynamic, and within the field is a collection of interacting cells out of which a particular organ is formed.[3] As a group, the cells within a given morphogenetic field are constrained: thus, cells in a limb field will become limb tissue, those in a cardiac field will become heart tissue.[4] However, specific cellular programming of individual cells in a field is flexible: an individual cell in a cardiac field can be redirected via cell-to-cell signaling to replace specific damaged or missing cells.[4] Imaginal discs in insect larvae are examples of morphogenetic fields.[5] (The term has also been appropriated in pseudoscientific contexts, which should not be confused with its legitimate use in developmental biology.)

Mass generation

In theoretical physics, a mass generation mechanism is a theory that describes the origin of mass from the most fundamental laws of physics. Physicists have proposed a number of models that advocate different views of the origin of mass. The problem is complicated because the primary role of mass is to mediate gravitational interaction between bodies, and no theory of gravitational interaction reconciles with the currently popular Standard Model of particle physics. There are two types of mass generation models: gravity-free models and models that involve gravity.

Composite gravity

In theoretical physics, composite gravity refers to models that attempt to derive general relativity in a framework where the graviton is constructed as a composite bound state of more elementary particles, usually fermions. A theorem by Steven Weinberg and Edward Witten shows that this is not possible in Lorentz-covariant theories: massless particles with spin greater than one are forbidden. The AdS/CFT correspondence may be viewed as a loophole in their argument. However, in this case not only is the graviton emergent; a whole spacetime dimension is emergent, too.[1]

Is it possible to Yo-Yo in space?

It is indeed possible to yo-yo in space. The only thing is that if you "free wheel it" (sorry, not a yo-yo expert) it tends to float around. It will, however, try to keep its orientation due to gyroscopic effects. This is sometimes used on spacecraft, either to stabilise them or to turn them (with control moment gyros). https://www.youtube.com/watch?v=ni4j5K4Lz3o

If each day is only 23h56m4s, over the course of 4 years, we accumulate 95.7 hours of unaccounted time when approximating each day to 24 hours. We give ourselves one extra day in February, which accounts for only 24 hours of that extra time, but where does that extra 71.7 hours go?

It sounds like you're confusing sidereal time with solar time. A sidereal day is the amount of time it takes for the Earth to rotate by 360°, and is indeed 23h56m4s. A solar day is the interval between two successive instances of the Sun crossing the local meridian. Since the Earth moves by roughly 1° around the Sun each day, the Earth has to rotate by roughly 361° for the Sun to cross the local meridian, which takes almost exactly four extra minutes. The calendar and the 24-hour clock are built on solar days, so the ~3m56s per day you computed is never unaccounted for; the leap day corrects for something different, namely the roughly 0.25 extra solar days in each orbital year.
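A quick sanity check of this in Python; the 0.9856°/day figure is just 360° divided by 365.25 days, an approximation I'm adding, not a number from the answer above.

SIDEREAL_DAY_S = 23 * 3600 + 56 * 60 + 4  # 86164 s to rotate 360°
extra_deg_per_day = 360.0 / 365.25        # Earth's daily orbital motion, ~0.9856°

# To bring the Sun back to the meridian, the Earth must rotate 360° plus
# the extra angle it moved along its orbit that day.
solar_day_s = SIDEREAL_DAY_S * (360.0 + extra_deg_per_day) / 360.0
print(round(solar_day_s))  # ≈ 86400 s, i.e. the familiar 24-hour solar day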

Ages 1 to 4 are very important for brain development, yet most people can't recall anything from that time period. Why don't we remember our earliest memories?

It's not that we forget our earliest years, it's that we don't form memories in the first place. The term for this is infantile "amnesia", but this is not actually a form of amnesia - that would require forgetting. As infants grow into toddlers, their brains grow fantastically quickly. So much so that any pathways deemed unimportant or weak are "pruned" (pruning is the technical term, actually). By age three, pruning calms down to the point where toddlers can start forming the memories they'll remember for possibly the rest of their lives, though usually the very earliest memories that stick are traumatic or otherwise notable. Pruning continues at a lower level for some time into childhood, perhaps to around age 4.

Since light stops penetrating water at around 1,000 meters deep and the deepest freshwater lake is 1,642 meters deep (both according to Google), is there an equivalent to deep-sea creatures for freshwater?

Lake Baikal (the deepest freshwater lake) appears to have complex life forms at its greatest depths. "Baikal is also home to the world's most abyssal freshwater fish. These fish have managed to preserve eyesight even at the greatest depths, although they see only in black and white." Also the golomyanka (oil fish) "can endure most pressure in the depths of the Baikal water. At night it rises to the water surface, and at daytime it swims down to great depths. Limnologists have had a chance to observe the golomyanka's behaviour in the water depths. At a depth of 1,000-1,400 metres and more, the golomyanka moves freely both horizontally and vertically, whereas at such a depth even a cannon cannot shoot because of the enormous pressure."

M-theory

M-theory is a theory in physics that unifies all consistent versions of superstring theory. Edward Witten first conjectured the existence of such a theory at a string-theory conference at the University of Southern California in the spring of 1995. Witten's announcement initiated a flurry of research activity known as the second superstring revolution. In string theory, the fundamental constituents are small loops of rapidly vibrating string-like matter.

If hand sanitizer kills 99.99% of germs, then won't the surviving 0.01% make hand sanitizer resistant strains?

Most hand sanitizers use alcohol, which kills indiscriminately. It would kill us if we didn't have livers to filter it, and in high enough doses it will kill anyway. Some germs survive due to randomly being out of contact with the alcohol, in nooks and crannies and such, not due to any mechanism that might be selected for. Let's say you throw 1000 humans into a volcano. One of them happens to land on a ledge inside the volcano and escapes. If he has kids, they will not be volcano-resistant.

If we return to the moon, is there a telescope on earth today strong enough to watch astronauts walking around on the surface?

No, I don't think any telescope could come close. For instance, Hubble has an angular resolution of about 1/10 of an arcsecond, and the Moon is approximately 384,000 km away. 1/10 arcsecond is 1/36000 of a degree, and a circle is 360 degrees. 1/10 arcsecond on a circle with radius 384,000 km is: 2 * 384000 * pi / 360 / 36000 = 0.18617 km. So the resolution of Hubble would be about 186 m, much too large to make out a single human. To discern a person on the moon, a telescope would need sub-1 m resolution, over 100 times better, which does not exist. (To be clear, 186 m is the smallest resolvable feature - one "pixel", in effect - not the width covered by the whole image.)
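The same small-angle arithmetic in Python, using only the numbers quoted in the answer:

import math

theta_arcsec = 0.1   # Hubble's angular resolution, arcseconds
distance_km = 384_000  # Earth-Moon distance

theta_rad = math.radians(theta_arcsec / 3600)  # arcsec -> degrees -> radians
smallest_feature_km = distance_km * theta_rad  # small-angle approximation

print(round(smallest_feature_km * 1000, 1))  # ≈ 186 m per resolution element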

Why are Primates incapable of Human speech, while lesser animals such as Parrots can emulate Human speech?

Non-human primates lack the neurological regions responsible for producing speech as well as the musculature in the throat. There are several theories of how language and learned vocalizations evolved in humans, songbirds, parrots, bats, and cetaceans (whales, dolphins), but a general consensus is that it arose independently several times. Some of my favorite neuroscientists who write about this are Erich Jarvis and Johan Bolhuis. Both are songbird researchers. Jarvis has a three-part series on YouTube about this if you want to learn more. I haven't watched it but have seen him lecture a few times and he does a great job explaining it. Also, I wouldn't refer to parrots as lesser animals in terms of intelligence. Corvids and parrots have exhibited a wide range of intelligent behaviors that were once considered unique to humans and some other apes, such as tool use and recursive learning. A recent study has shown that the density of neurons in birds' brains, especially parrots and songbirds, is comparable to that of humans and primates.

If we could travel at 99.9% the speed of light, it would take 4 years to get to Alpha Centauri. Would the people on the spaceship feel like they were stuck on board for 4 years or would it feel shorter for them?

Nope - time dilation is a cool part of special relativity. At 99.9% the speed of light, the trip would feel like roughly 0.18 years to the occupants of the spaceship. The closer you get to 100% - i.e., add more 9's to the end of your percentage - the shorter the trip would feel to the occupants. Here's a cool calculator site you can play with to see these effects.
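A minimal sketch of that calculation in Python, taking the 4-year Earth-frame trip time from the question as given:

import math

beta = 0.999                         # fraction of the speed of light
gamma = 1 / math.sqrt(1 - beta**2)   # Lorentz factor, ~22.4 at 0.999c

earth_frame_years = 4.0
ship_frame_years = earth_frame_years / gamma  # proper time on board

print(round(gamma, 1), round(ship_frame_years, 2))  # 22.4, ~0.18 years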

norepinephrine

Norepinephrine is a naturally occurring chemical in the body that acts as both a stress hormone and neurotransmitter (a substance that sends signals between nerve cells). It's released into the blood as a stress hormone when the brain perceives that a stressful event has occurred. The general function of norepinephrine is to mobilize the brain and body for action. Norepinephrine release is lowest during sleep, rises during wakefulness, and reaches much higher levels during situations of stress or danger, in the so-called fight-or-flight response. In the brain, norepinephrine increases arousal and alertness, promotes vigilance, enhances formation and retrieval of memory, and focuses attention; it also increases restlessness and anxiety. In the rest of the body, norepinephrine increases heart rate and blood pressure, triggers the release of glucose from energy stores, increases blood flow to skeletal muscle, reduces blood flow to the gastrointestinal system, and inhibits voiding of the bladder and gastrointestinal motility.

Omega-6 fatty acid

Omega-6 fatty acids (also referred to as ω-6 fatty acids or n-6 fatty acids) are a family of polyunsaturated fatty acids that have in common a final carbon-carbon double bond in the n-6 position, that is, the sixth bond, counting from the methyl end. Linoleic acid (18:2, n−6), the shortest-chained omega-6 fatty acid, is categorized as an essential fatty acid because the human body cannot synthesize it. Mammalian cells lack the enzyme omega-3 desaturase and therefore cannot convert omega-6 fatty acids to omega-3 fatty acids. Closely related omega-3 and omega-6 fatty acids act as competing substrates for the same enzymes.[2] This underlines the importance of the proportion of omega-3 to omega-6 fatty acids in a diet. Omega-6 fatty acids are precursors to endocannabinoids, lipoxins, and specific eicosanoids. Medical research on humans has found a correlation (though correlation does not imply causation) between high intake of omega-6 fatty acids from vegetable oils and disease. However, biochemistry research has concluded that air pollution, heavy metals, smoking, passive smoking, lipopolysaccharides, lipid peroxidation products (found mainly in vegetable oils, roasted/rancid nuts and roasted/rancid oily seeds) and other exogenous toxins initiate the inflammatory response in cells. This leads to the expression of the COX-2 enzyme and subsequently to the temporary production of inflammation-promoting prostaglandins from arachidonic acid, for the purpose of alerting the immune system to the cell damage, and eventually to the production of anti-inflammatory molecules (e.g. lipoxins and prostacyclin) during the resolution phase of inflammation, after the cell damage has been repaired.

Omega-3 fatty acids

Omega−3 fatty acids, also called Omega-3 oils, ω−3 fatty acids or n−3 fatty acids,[1] are polyunsaturated fatty acids (PUFAs) characterized by the presence of a double bond three atoms away from the terminal methyl group in their chemical structure. They are widely distributed in nature, being important constituents of animal lipid metabolism, and they play an important role in the human diet and in human physiology. The three types of omega−3 fatty acids involved in human physiology are α-linolenic acid (ALA), found in plant oils, and eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), both commonly found in marine oils.[2] Marine algae and phytoplankton are primary sources of omega−3 fatty acids. Common sources of plant oils containing ALA include walnut, edible seeds, clary sage seed oil, algal oil, flaxseed oil, Sacha Inchi oil, Echium oil, and hemp oil, while sources of animal omega−3 fatty acids EPA and DHA include fish, fish oils, eggs from chickens fed EPA and DHA, squid oils, krill oil, and certain algae. Mammals are unable to synthesize the essential omega−3 fatty acid ALA and can only obtain it through diet. However, they can use ALA, when available, to form EPA and DHA, by creating additional double bonds along its carbon chain (desaturation) and extending it (elongation). Namely, ALA (18 carbons and 3 double bonds) is used to make EPA (20 carbons and 5 double bonds), which is then used to make DHA (22 carbons and 6 double bonds).[4] The ability to make the longer-chain omega−3 fatty acids from ALA may be impaired in aging.[5] In foods exposed to air, unsaturated fatty acids are vulnerable to oxidation and rancidity.[6] Dietary supplementation with omega−3 fatty acids does not appear to affect the risk of death, cancer or heart disease.[7][8] Furthermore, fish oil supplement studies have failed to support claims of preventing heart attacks or strokes or any vascular disease outcomes.

If bruises are from bleeding underneath the skin, where does all the blood go when it heals?

Part of the leaked blood will coagulate and help to form the clot / scab to stop the bleeding. Macrophages will come and clean up any "loose" blood, or debris, or old scab material - they take it in, digest it, and spit the digested remains out into the blood stream, where it's filtered out in the kidneys. The iron (I think) attaches to iron-transporting proteins that are floating around in the blood, and eventually makes its way back to the red bone marrow, where it's used to make new hemoglobin. Why do macrophages come along and clean things up? What are their motivations and dreams in their life? Their motivation is an increasing chemogradient of various chemical factors, including interleukins, cytokines, disrupted lipoproteins from cellular membranes, and other chemotactic molecules. They quite literally will follow the strongest gradient to its origin and - boom - perfect marriage. Which actually works out well for the tissue, but the macrophages kinda get screwed; that's a different story.

Polar drift

Polar drift is a geological phenomenon caused by variations in the flow of molten iron in Earth's outer core, resulting in changes in the orientation of Earth's magnetic field, and hence the position of the magnetic north and south poles. The North Magnetic Pole is approximately 965 kilometres (600 mi) from the geographic north pole. The pole drifts considerably each day, and since 2007 it has moved about 55 to 60 km (34 to 37 mi) per year as a result of this phenomenon. The South Magnetic Pole is constantly shifting due to changes in the Earth's magnetic field. As of 2005 it was calculated to lie at 64°31′48″S 137°51′36″E,[2] placing it off the coast of Antarctica, between Adelie Land and Wilkes Land.

If nuclear waste will still be radioactive for thousands of years, why is it not usable?

Radioactivity, by itself, is not that useful for generating power. What is useful for generating power is the induced splitting of lots of atoms at the same time, not the slow trickle of energy release you get from radioactive decay alone. To put it another way: nuclear reactors don't work because their fuel is radioactive, they work because their fuel is splittable by neutrons. Those are not the same thing (all fuel splittable by neutrons is radioactive, but not all radioactive atoms are splittable by neutrons). To add to this, radioactive decay CAN be used as an energy source in radioisotope thermoelectric generators, where the heat from decaying atoms is used to generate electricity. Satellites use these for low power generation. Do they use recycled nuclear waste or does it require more potent material? If I remember correctly, many satellites use plutonium, a by-product of making uranium for nuclear bombs. Not necessarily a by-product. Rather, non-fissile uranium-238 captures a neutron in a reactor to become uranium-239, which beta-decays (via neptunium-239) into plutonium-239. It's usually the intended product, not leftovers. Small amounts can be produced in normal reactors from the U-238 remaining in the fuel after enrichment. But it is probably easier to take it from a nuclear arsenal than slowly gathering it from the nuclear waste with chemistry. I may be wrong though, corrections are welcome.

Skyquake

Skyquakes[1] are unexplained reports of a phenomenon that sounds like a cannon, trumpet or a sonic boom coming from the sky. They have been heard in several locations around the world, such as the banks of the river Ganges in India, and the East Coast, the inland Finger Lakes, and the Magic Valley of South Central Idaho in the United States. Their sound has been described as being like distant but inordinately loud thunder while no clouds are in the sky large enough to generate lightning. Those familiar with the sound of cannon fire say the sound is nearly identical. The booms occasionally cause shock waves that rattle plates. Early white settlers in North America were told by the native Haudenosaunee (Iroquois) that the booms were the sound of the Great Spirit continuing his work of shaping the earth.

Hypotheses: coronal mass ejections - CMEs often generate shock waves similar to what happens when an aircraft breaks the sound barrier in Earth's atmosphere; the solar wind's equivalent of a sonic boom can accelerate protons up to millions of miles per minute, as much as 40 percent of the speed of light. Also, meteors entering the atmosphere causing sonic booms.

Why do we have to "fall" asleep? Why can't we just decide to be asleep?

Sleeping literally changes our very physiology. Our core body temperature drops, which allows certain proteins to work differently than they do at our "waking temp," as a broad example. It's not something we'd want easy control over. Most importantly, the process of getting sleepy is highly regulated not only by our circadian rhythm but also by other hormone systems. We need to burn energy to feel fatigued (when we use ATP we make adenosine as a byproduct, which signals fatigue in humans). We need a lack of blue-wavelength light to initiate the process of releasing melatonin at night, which makes us sleepy and helps initiate the sleeping end of our circadian processes. We don't have voluntary control over sleep because it's chemically regulated - adenosine, melatonin, hypocretin (orexin), etc. It's not something we can flex like a muscle. It's essentially hormonal in nature and therefore requires us to use drugs (meaning ligands that bind to targets in our body) to control it.

A wave is typically measured by frequency and amplitude. What aspects of gravity do these two properties affect, and are these aspects explainable/understandable to non-physicists?

So in order to make gravitational waves you need to shake something really massive really fast. In the case of two inspiraling black holes, the amplitude is related to how hard they are accelerating in their orbit, and the frequency is related to the period of the orbit. This is why inspiraling binaries have a gravitational wave 'chirp' - as they come closer in their orbit the frequency increases as they orbit faster and faster, and the amplitude increases as well. If a wave passes through you, it will strain you a bit, effectively squeezing and stretching you. The amount of the squeeze is related to the amplitude, and the frequency of the wave is just the frequency of the squeezing. It's this tiny wavy squeezing that LIGO was designed to measure.
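To get a feel for how tiny the squeezing is, here is a one-line estimate in Python; the strain value h ≈ 1e-21 is a typical amplitude for detected events and the 4 km arm length is LIGO's, both numbers I'm supplying rather than taken from the answer above.

strain = 1e-21        # typical gravitational-wave strain amplitude at Earth (assumed)
arm_length_m = 4_000  # LIGO interferometer arm length

delta_L = strain * arm_length_m  # change in arm length: ~4e-18 m,
print(delta_L)                   # roughly 400x smaller than a proton's diameter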

Core-mantle boundary

The core-mantle boundary (CMB in the parlance of solid earth geophysicists) of the Earth lies between the planet's silicate mantle and its liquid iron-nickel outer core. This boundary is located at approximately 2891 km (1796 mi) depth beneath the Earth's surface. The boundary is observed via the discontinuity in seismic wave velocities at that depth due to the differences between the acoustic impedances of the solid mantle and the molten outer core. P-wave velocities are much slower in the outer core than in the deep mantle while S-waves do not exist at all in the liquid portion of the core. Recent evidence suggests a distinct boundary layer directly above the CMB possibly made of a novel phase of the basic perovskite mineralogy of the deep mantle named post-perovskite. Seismic tomography studies have shown significant irregularities within the boundary zone and appear to be dominated by the African and Pacific large low-shear-velocity provinces (LLSVPs).[1]

Chandler wobble

The Chandler wobble or variation of latitude is a small deviation in the Earth's axis of rotation relative to the solid earth,[1] which was discovered by American astronomer Seth Carlo Chandler in 1891. It amounts to a change of about 9 metres (30 ft) in the point at which the axis intersects the Earth's surface and has a period of 433 days.[2][3] This wobble, which is a nutation, combines with another wobble with a period of one year, so that the total polar motion varies with a period of about 7 years. The Chandler wobble is an example of the kind of motion that can occur for a spinning object that is not a sphere; this is called a free nutation. Somewhat confusingly, the direction of the Earth's spin axis relative to the stars also varies with different periods, and these motions—caused by the tidal forces of the Moon and Sun—are also called nutations, except for the slowest, which are precessions of the equinoxes.

Greisen-Zatsepin-Kuzmin limit

The Greisen-Zatsepin-Kuzmin limit (GZK limit) is a theoretical upper limit on the energy of cosmic ray protons traveling from other galaxies through the intergalactic medium to our galaxy. The limit is 5×10^19 eV, or about 8 joules (the energy of a proton travelling at ~99.99999999999999999998% the speed of light). The limit is set by slowing interactions of the protons with the microwave background radiation over long distances (~160 million light-years). The limit is at the same order of magnitude as the upper limit for energy at which cosmic rays have experimentally been detected; for example, one extreme-energy cosmic ray, the Oh-My-God particle, was found to possess a record-breaking 3.12×10^20 eV (50 joules)[1][2] of energy (about the same as the kinetic energy of a 95 km/h baseball). The GZK limit is derived under the assumption that ultra-high energy cosmic rays are protons. Measurements by the largest cosmic-ray observatory, the Pierre Auger Observatory, suggest that most ultra-high energy cosmic rays are heavier elements.[3] In this case, the argument behind the GZK limit does not apply in the originally simple form, and there is no fundamental contradiction in observing cosmic rays with energies that violate the limit.

If we could drain the ocean, could we breath or live on the deepest parts or would pressures, temperatures, and oxygen levels be too extreme for us to live such as high altitudes?

The Mariana trench is about 11,000 meters below sea level. If you built a cofferdam down to that depth, as u/SirHerald suggests, the pressure would be about 3.2 atm. Divers experience a similar pressure at a depth of around 22 meters, so humans can survive these pressures in the short term. In the long term, I found this chart from NASA suggesting you would have to worry about mild nitrogen narcosis, making you feel slightly drunk, and mild oxygen toxicity. So you probably wouldn't want to live at that pressure, but a short trip down wouldn't be any worse than diving. But your question was about draining the oceans. The oceans cover 72% of the surface of the earth, and the average depth of the ocean is about 3,700 meters. That means a large portion of the atmosphere would sink into the emptied basins, resetting 1 atm of pressure to a depth of something like 2,600 meters below sea level. So that would make the Mariana trench a bit more tolerable at around 2.5 atm, but it would also mean some high-altitude cities would become nearly uninhabitable.

How in the world did you calculate the effect drained oceans would have on changing overall atmospheric pressure? You can see the relevant numbers in his post: average depth and coverage area. One thing not covered is how the weight of the water affects the topography; without it, things would level out a bit. I assume that would be a very messy and unpleasant process? There would be a lot of earthquakes if all of the Earth's oceans suddenly disappeared. Not to mention the fact that phytoplankton contribute anywhere from 50 to 85 percent of the oxygen in our atmosphere - earthquakes would be the least of our concerns in the long term. The way that the oceans, rain forests, glaciers, hell, even the saline content are balanced and interact is very fragile as it is.
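For the curious, the pressure at the bottom of a dry trench can be estimated with the isothermal barometric formula, P = P0·exp(h/H). The ~8.4 km scale height is a standard textbook value I'm assuming here; it gives a slightly higher number than the ~3.2 atm quoted above, and the exact figure depends on the assumed temperature profile.

import math

SCALE_HEIGHT_M = 8_400  # standard isothermal atmospheric scale height (assumed)
depth_m = 11_000        # Mariana trench depth below sea level

# Pressure grows exponentially below the reference (sea) level.
pressure_atm = math.exp(depth_m / SCALE_HEIGHT_M)
print(round(pressure_atm, 1))  # ≈ 3.7 atm at the bottom of a drained trench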

suprachiasmatic nucleus (SCN)

The suprachiasmatic nucleus or nuclei (SCN) is a tiny region of the brain in the hypothalamus, situated directly above the optic chiasm. It is responsible for controlling circadian rhythms. The neuronal and hormonal activities it generates regulate many different body functions in a 24-hour cycle. The mouse SCN contains approximately 20,000 neurons. The SCN interacts with many other regions of the brain. It contains several cell types and several different peptides (including vasopressin and vasoactive intestinal peptide) and neurotransmitters.

Messinian salinity crisis

The Messinian Salinity Crisis (MSC), also referred to as the Messinian Event, and in its latest stage as the Lago Mare event, was a geological event during which the Mediterranean Sea went into a cycle of partly or nearly complete desiccation throughout the latter part of the Messinian age of the Miocene epoch, from 5.96 to 5.33 Ma (million years ago). It ended with the Zanclean flood, when the Atlantic reclaimed the basin. Sediment samples from below the deep seafloor of the Mediterranean Sea, which include evaporite minerals, soils, and fossil plants, show that the precursor of the Strait of Gibraltar closed tight about 5.96 million years ago, sealing the Mediterranean off from the Atlantic. This resulted in a period of partial desiccation of the Mediterranean Sea, the first of several such periods during the late Miocene.[5] After the strait closed for the last time around 5.6 Ma, the region's generally dry climate at the time dried the Mediterranean basin out nearly completely within a thousand years. This massive desiccation left a deep dry basin, reaching 3 to 5 km (1.9 to 3.1 mi) deep below normal sea level, with a few hypersaline pockets similar to today's Dead Sea. Then, around 5.5 Ma, less dry climatic conditions resulted in the basin receiving more freshwater from rivers, progressively filling and diluting the hypersaline lakes into larger pockets of brackish water (much like today's Caspian Sea). The Messinian Salinity Crisis ended with the Strait of Gibraltar finally reopening 5.33 Ma, when the Atlantic rapidly filled up the Mediterranean basin in what is known as the Zanclean flood. Even today, the Mediterranean is considerably saltier than the North Atlantic, owing to its near isolation by the Strait of Gibraltar and its high rate of evaporation. If the Strait of Gibraltar closes again (which is likely to happen in the near future on a geological time scale), the Mediterranean would mostly evaporate in about a thousand years, after which continued northward movement of Africa may obliterate the Mediterranean altogether.

Oh-My-God particle

The Oh-My-God particle was the highest-energy cosmic ray detected at the time (15 October 1991) by the Fly's Eye detector in Dugway Proving Ground, Utah, US.[1][2][3] Its energy was estimated as (3.2±0.9)×10^20 eV, or 51 J. This is 20 million times more energetic than the highest energy measured in electromagnetic radiation emitted by an extragalactic object[4] and 10^20 (100 quintillion) times the photon energy of visible light, equivalent to a 142-gram (5 oz) baseball travelling at about 26 m/s (94 km/h; 58 mph). Although higher energy cosmic rays have been detected since then, this particle's energy was unexpected, and called into question theories of that era about the origin and propagation of cosmic rays. Assuming it was a proton, this particle traveled at 99.99999999999999999999951% of the speed of light, its Lorentz factor was 3.2×10^11 and its rapidity was 27.1. At this speed, if a photon were travelling with the particle, it would take over 215,000 years for the photon to gain a 1 cm lead as seen in Earth's reference frame.
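Two of the quoted numbers can be checked with a few lines of Python, taking the Lorentz factor γ = 3.2×10^11 from the entry above as given:

c = 2.998e8     # speed of light, m/s
gamma = 3.2e11  # Lorentz factor quoted above

# For gamma >> 1, 1 - v/c ≈ 1/(2*gamma^2).
one_minus_beta = 1 / (2 * gamma**2)  # ≈ 4.9e-24, matching the quoted speed

# Time for a co-moving photon to pull 1 cm ahead, in Earth's frame:
lead_seconds = 0.01 / (c * one_minus_beta)
print(round(lead_seconds / 3.156e7))  # ≈ 216,000 years, i.e. "over 215,000"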

Alpha effect

The alpha effect refers to the increased nucleophilicity of an atom due to the presence of an adjacent (alpha) atom with lone pair electrons.[1] This first atom does not necessarily exhibit increased basicity compared with a similar atom without an adjacent electron-donating atom. The effect is well established, with many theories proposed to explain it, but without a clear winner. The effect was first observed by Jencks and Carriuolo in 1960[2][3] in a series of chemical kinetics experiments involving the reaction of the ester p-nitrophenyl acetate with a range of nucleophiles. Regular nucleophiles such as the fluoride anion, aniline, pyridine, ethylene diamine and the phenolate ion were found to have pseudo-first-order reaction rates corresponding to their basicity as measured by their pKa. Other nucleophiles, however, reacted much faster than expected based on this criterion alone. These include hydrazine, hydroxylamine, the hypochlorite ion and the hydroperoxide anion.

how big is a whale heart?

The blue whale's heart is huge! It is about 5 feet long, 4 feet wide, and 5 feet tall and weighs around 400 pounds; its beat can be detected from two miles away

Coronal heating problem

The coronal heating problem in solar physics relates to the question of why the temperature of the Sun's corona is millions of kelvin higher than that of the surface. Several theories have been proposed to explain this phenomenon, but it is still challenging to determine which of these is correct.[24] The problem first emerged when Bengt Edlén and Walter Grotrian identified Fe IX and Ca XIV lines in the solar spectrum.[25] This led to the discovery that the emission lines seen during solar eclipses are not caused by an unknown element called "coronium" but by known elements at very high stages of ionization.[24] Comparing the coronal temperature with the photospheric temperature of about 6,000 K leads to the question of how the 200-times-hotter coronal temperature can be maintained.[25] The problem is primarily concerned with how the energy is transported up into the corona and then converted into heat within a few solar radii.[26]

Hard problem of consciousness

The problem of determining how physiological processes, such as ion flow across nerve membranes, cause different perceptual experiences. The hard problem of consciousness is the problem of explaining why and how sentient organisms have qualia[note 1] or phenomenal experiences—how and why it is that some internal states are subjective, felt states, such as heat or pain, rather than merely nonsubjective, unfelt states, as in a thermostat or a toaster.[2] The philosopher David Chalmers, who introduced the term "hard problem" of consciousness,[3] contrasts this with the "easy problems" of explaining the ability to discriminate, integrate information, report mental states, focus attention, and so forth.

Piezophile

The inhabitants of the piezosphere, aptly named piezophiles, are sub-classified according to the pressure range they tolerate. The most extreme hyperpiezophiles can tolerate pressures up to 130 MPa, or 1,300× atmospheric pressure! In the deep ocean, temperatures range from near freezing (2-3 ºC) to piping hot (> 400 ºC near hydrothermal vents). Piezophiles occupy almost the entire spectrum of temperatures known to support life (with the exception of some very cold-loving bacteria buried in ice and permafrost). Accordingly, scientists further classify piezophiles based on their temperature preference. A piezophile (from Greek "piezo-" for pressure and "-phile" for loving) is an organism with optimal growth under high hydrostatic pressure or, more operationally, an organism that has its maximum rate of growth at a hydrostatic pressure equal to or above 10 MPa (= 99 atm = 1,450 psi), when tested over all permissible temperatures.[1] Originally, the term barophile was used for these organisms, but since the prefix "baro-" stands for weight, the term piezophile should be given preference.[2] Like all definitions of extremophiles, the definition of piezophiles is anthropocentric: humans consider moderate values of hydrostatic pressure to be those around 1 atm (= 0.1 MPa = 14.7 psi). A hyperpiezophile is defined as an organism that has its maximum rate of growth above 50 MPa (= 493 atm = 7,252 psi). The current record for the highest hydrostatic pressure where growth was observed is 130 MPa (= 1,283 atm = 18,855 psi), by the archaeon Thermococcus piezophilus.[4] Obligate piezophiles are organisms that are unable to grow under lower hydrostatic pressures, such as 0.1 MPa. In contrast, piezotolerant organisms are those that have their maximum rate of growth at a hydrostatic pressure under 10 MPa, but that nevertheless are able to grow at lower rates under higher hydrostatic pressures. Most of the Earth's biosphere (in terms of volume) is subject to high hydrostatic pressure, and the piezosphere comprises the deep sea (at a depth of 1,000 m and greater) plus the deep subsurface (which can extend up to 5,000 m beneath the seafloor or the continental surface).[3][5] The deep sea has a mean temperature around 1 to 3 °C, and it is dominated by psychropiezophiles, in contrast to the deep subsurface and hydrothermal vents in the seafloor, which are dominated by thermopiezophiles that prosper at temperatures above 45 °C (113 °F).

Why is the human nose the shape it is? Why isn't it just two holes in our face?

The nose is actually an organ that performs many functions aside from just being a conduit to the respiratory and olfactory systems. The nose warms up and moistens air before it enters the trachea in the winter and has an air-conditioning effect in the summer. The nose is the first line of defense before foreign particles can be inhaled into the respiratory tract - large particles are captured by hairs and smaller ones get caught in mucus. In that way, it can be thought of as an organ of the immune system, and it is also responsible for draining the sinuses. The endings of the olfactory nerves go through the roof of the nasal cavity; if there were no protection and just open orifices on the front of the face, you would basically have very little between raw exposed cranial nerve endings and the outside world. Lastly, the shape of the nose changes vocal resonance and affects your voice. I think you may have missed mentioning the most obvious reason. The nose primarily functions to keep water out of the respiratory passages. We can't close our nose like we can our mouth, but the shape of the nose, the position and size of the nostrils, and the nostril hairs themselves (which are hydrophobic) all serve to block water from entering the nasal passages. Snub-nosed monkeys have problems with rain getting in their flat faces, so they sneeze a lot during storms.

Protein folding

The physical process by which a polypeptide folds into its characteristic three-dimensional structure, which is essential to the protein's function. Protein folding is the physical process by which a protein chain acquires its native 3-dimensional structure, a conformation that is usually biologically functional, in an expeditious and reproducible manner. It is the physical process by which a polypeptide folds into its characteristic and functional three-dimensional structure from a random coil.[1] Each protein exists as an unfolded polypeptide or random coil when translated from a sequence of mRNA to a linear chain of amino acids. This polypeptide lacks any stable (long-lasting) three-dimensional structure. As the polypeptide chain is being synthesized by a ribosome, the linear chain begins to fold into its three-dimensional structure. Folding begins to occur even during translation of the polypeptide chain. Amino acids interact with each other to produce a well-defined three-dimensional structure, the folded protein, known as the native state. The resulting three-dimensional structure is determined by the amino acid sequence or primary structure (Anfinsen's dogma).[2]

Circadian rhythm

The primary circadian clock in mammals is located in the suprachiasmatic nucleus (or nuclei) (SCN), a pair of distinct groups of cells located in the hypothalamus. Destruction of the SCN results in the complete absence of a regular sleep-wake rhythm. The SCN receives information about illumination through the eyes. The retina of the eye contains "classical" photoreceptors ("rods" and "cones"), which are used for conventional vision. But the retina also contains specialized ganglion cells that are directly photosensitive, and project directly to the SCN, where they help in the entrainment (synchronization) of this master circadian clock. These cells contain the photopigment melanopsin and their signals follow a pathway called the retinohypothalamic tract, leading to the SCN. If cells from the SCN are removed and cultured, they maintain their own rhythm in the absence of external cues. The SCN takes the information on the lengths of the day and night from the retina, interprets it, and passes it on to the pineal gland, a tiny structure shaped like a pine cone and located on the epithalamus. In response, the pineal secretes the hormone melatonin.[73] Secretion of melatonin peaks at night and ebbs during the day and its presence provides information about night-length.

Prostaglandin

The prostaglandins are a group of lipids made at sites of tissue damage or infection that are involved in dealing with injury and illness, having diverse hormone-like effects in animals. They control processes such as inflammation, blood flow, the formation of blood clots and the induction of labour. Prostaglandins have been found in almost every tissue in humans and other animals. They are derived enzymatically from the fatty acid arachidonic acid.[2] Every prostaglandin contains 20 carbon atoms, including a 5-carbon ring. They are a subclass of eicosanoids and of the prostanoid class of fatty acid derivatives. The structural differences between prostaglandins account for their different biological activities. A given prostaglandin may have different and even opposite effects in different tissues in some cases. The ability of the same prostaglandin to stimulate a reaction in one tissue and inhibit the same reaction in another tissue is determined by the type of receptor to which the prostaglandin binds. They act as autocrine or paracrine factors with their target cells present in the immediate vicinity of the site of their secretion. Prostaglandins differ from endocrine hormones in that they are not produced at a specific site but in many places throughout the human body. Prostaglandins are powerful locally acting vasodilators and inhibit the aggregation of blood platelets. Through their role in vasodilation, prostaglandins are also involved in inflammation. They are synthesized in the walls of blood vessels and serve the physiological function of preventing needless clot formation, as well as regulating the contraction of smooth muscle tissue.[3] Conversely, thromboxanes (produced by platelet cells) are vasoconstrictors and facilitate platelet aggregation. Their name comes from their role in clot formation (thrombosis).

Why is it important that long wear contacts be permeable to oxygen?

The rate of oxygen delivery to the eye, leading to greater eye health and comfort, is the biggest reason. These lenses allow people to wear contact lenses longer throughout the day and to have less redness. Oxygen-permeable contact lens materials are often credited to NASA-funded polymer research.

What is the point of using screws with a Phillips head, flathead, allen, hex, etc. instead of just having one universal screw type?

The reason for the different styles is cost and torque. The slotted head screws are cheap and easy to make. But they're completely useless for powered screwdrivers and you can't put much torque on the screw without it either slipping out or stripping the head (and marring the surface of whatever you're screwing). Phillips screws are self-centering, making powered screwdrivers possible. They're somewhat more expensive to produce than slotted-head. They tend to 'cam-out' easily under torque, making it hard to apply much torque. I've heard they were designed that way to prevent overtightening. However, it's not good for exposed fasteners to look stripped. Robertson-head and allen-head fasteners can handle more torque than phillips-head fasteners, but are more expensive. Because the bottom of the hole is flat (unlike the pointed end of the phillips), there's more contact area and so it's less likely to cam-out. The robertson-head is cheaper than the allen-head, but the allen-head has six points of contact rather than four, making it less prone to rounding out the hole. The Torx-head fasteners solve the problem of rounding/stripping by having the flat bottom of the robertson/allen that reduces cam-out, but with much better contact with the driving bit to prevent stripping the head. The points of the 'star' on the driving bit engage the recesses on the screw at nearly right angles, so it has a very positive contact. Torx is becoming more and more popular because of that, particularly in assembly-line work. Because they're less likely than a phillips to be damaged when tightening, the allen (internal hex) heads are often used for exposed ('decorative') fasteners on 'some assembly required' furniture. It's also very cheap to make the allen keys, so they usually include one with the fasteners.

Squid giant synapse

The squid giant synapse is a chemical synapse found in squid. It is the largest chemical junction in nature. The squid giant synapse (Fig 1) was first recognized by John Zachary Young in 1939. It lies in the stellate ganglion on each side of the midline, at the posterior wall of the squid's muscular mantle. Activation of this synapse triggers a synchronous contraction of the mantle musculature, causing the forceful ejection of a jet of water from the mantle. This water propulsion allows the squid to move rapidly through the water and even to jump through the surface of the water (breaking the air-water barrier) to escape predators.

Why do airplanes need to fly so high?

There are generally a few reasons. One of the biggest is that higher altitude means thinner atmosphere and less resistance on the plane. There's also the fact that terrain elevation is measured relative to sea level, and some terrain may be much higher above sea level than the takeoff strip; planes need to be able to clear it with a lot of room left over. Lastly, another good reason is simply that they need to be above things like insects and most types of birds. Because of the lower resistance at higher altitudes, the plane can almost come down to an idle and stay elevated and moving, so flying high also helps a lot with efficiency.

How were ancient rope suspension bridges built across large gaps in terrain?

There were a few ways to do it, but one common way was to get one or two long pieces of thin string or rope across, then use that to pull a larger rope across. One method was to tie two pieces of light rope to an arrow and shoot the arrow over the river. One of the ropes would be tied taut at each end. Then, from the side opposite the archer, a ring was put around the taut rope. Then the end of the other light rope was tied around the ring, and a thicker rope was also tied around the ring. Then the archer would pull on the light rope, and it would drag the thick rope back to the archer's side of the river. From there, the archer could tie an even thicker rope and have it sent back over. When they built the first bridge over Niagara Falls, a similar method was used, but instead of archers, a boy flew a kite over the falls. https://en.wikipedia.org/wiki/Niagara_Falls_Suspension_Bridge

If you put a Garden in the ISS, Could you have infinite oxygen?

This is called bioregenerative life support, and it falls in the broader category of Environmental Control and Life Support Systems (ECLSS). Short answer is yes, but not infinite and not without disadvantages. Note that plants aren't the only way to recycle oxygen. Currently they use water electrolysis to inject oxygen into their atmosphere, and use the resulting hydrogen to apply the Sabatier reaction with CO2. This yields methane as a waste product, and captures oxygen from CO2 in the form of water, so they can later apply electrolysis again and recycle oxygen. A clear advantage of using plants would be trapping carbon into edible forms, so you'd be recycling not only oxygen but also food. However, not all plants have 100% edible mass, and those that do usually offer very few (if not negligible) calories (e.g. lettuce). A problem with plants is that they could die under some conditions, so this system isn't 100% reliable. And in any case, whether bioregenerative or artificially regenerative, you can never achieve 100% reuse of ECLSS resources. But it's OK - you can get close enough and run for a long time on very little in the way of supplies. They are actively researching this topic; recently they've been able to grow lettuce on the ISS.

How many plants would you need to create enough oxygen for, say, 6 men? I'm imagining a LOT of plants would be needed. I've seen different estimates for those requirements, ranging from a few tens of m2 to hundreds of m2 per person. Most likely it depends a lot on what you're planning to plant, and everyone has a different concept for that. How many plants is even harder to answer because they range a lot in size (i.e. you can have many small plants or a few big ones). What they all agree on is that each person needs around 2,500 kcal/day of food and 0.5-0.8 kg/day of oxygen.
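As a rough stoichiometric sketch of the electrolysis step (2 H2O → 2 H2 + O2), here is how much water would have to be split per day to supply the 0.8 kg/day oxygen figure quoted above; the molar masses are standard values I'm supplying.

# Water needed per day to produce 0.8 kg of O2 by electrolysis:
# 2 H2O -> 2 H2 + O2, i.e. 2 mol of water (36 g) per mol of O2 (32 g).
M_O2 = 32.0   # g/mol
M_H2O = 18.0  # g/mol

o2_needed_kg = 0.8                     # per person per day (from the answer above)
moles_o2 = o2_needed_kg * 1000 / M_O2  # ≈ 25 mol
water_kg = moles_o2 * 2 * M_H2O / 1000 # 2 mol H2O per mol O2
print(round(water_kg, 2))              # ≈ 0.9 kg of water per person per day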

ingestible thermometer

To monitor astronaut body temperature during space flight, NASA teamed with Johns Hopkins University in the late 1980s to develop a thermometer pill called the Ingestible Thermal Monitoring System. ... A recorder outside of the body can read this signal and display a core body temperature and other vital statistics. A pill thermometer is an ingestible thermometer that allows a person's core temperature to be continuously monitored. It was developed by NASA in collaboration with Johns Hopkins University for use with astronauts.[1] Since then the pill has been used by mountain climbers,[2] football players, cyclists,[3] F1 drivers[4] and in the mining industry.

Tyrosine

Tyrosine is an amino acid. Amino acids are the building blocks of protein. The body makes tyrosine from another amino acid called phenylalanine. Tyrosine can also be found in dairy products, meats, fish, eggs, nuts, beans, oats, and wheat. Aside from being a proteinogenic amino acid, tyrosine has a special role by virtue of the phenol functionality. It occurs in proteins that are part of signal transduction processes and functions as a receiver of phosphate groups that are transferred by way of protein kinases. Phosphorylation of the hydroxyl group can change the activity of the target protein, or may form part of a signaling cascade via SH2 domain binding. A tyrosine residue also plays an important role in photosynthesis. In chloroplasts (photosystem II), it acts as an electron donor in the reduction of oxidized chlorophyll. In this process, it loses the hydrogen atom of its phenolic OH-group. This radical is subsequently reduced in the photosystem II by the four core manganese clusters.

ascorbic acid

Vitamin C Vitamin C, also known as ascorbic acid and ascorbate, is a vitamin found in various foods and sold as a dietary supplement.[3] It is used to prevent and treat scurvy.[3] Vitamin C is an essential nutrient involved in the repair of tissue and the enzymatic production of certain neurotransmitters.[3][4] It is required for the functioning of several enzymes and is important for immune system function.[4][5] It also functions as an antioxidant. Evidence does not support its use for the prevention of the common cold.[6][7] There is, however, some evidence that regular use may shorten the length of colds.[8] It is unclear whether supplementation affects the risk of cancer, cardiovascular disease, or dementia.[9][10] It may be taken by mouth or by injection. Vitamin C is generally well tolerated.[3] Large doses may cause gastrointestinal discomfort, headache, trouble sleeping, and flushing of the skin.[3][7] Normal doses are safe during pregnancy.[1] The United States Institute of Medicine recommends against taking large doses. Vitamin C is an essential nutrient for certain animals including humans. The term vitamin C encompasses several vitamers that have vitamin C activity in animals. Ascorbate salts such as sodium ascorbate and calcium ascorbate are used in some dietary supplements. These release ascorbate upon digestion. Ascorbate and ascorbic acid are both naturally present in the body, since the forms interconvert according to pH. Oxidized forms of the molecule such as dehydroascorbic acid are converted back to ascorbic acid by reducing agents.[4]

Asked my chemistry teacher (first year of high school) this: "Why do we use the mole (unit) instead of just using mass (grams)? Isn't it easier to handle, given the fact that we can weigh it easily? Why the need to use the mole?"

We use moles instead of mass since moles accurately show how many molecules of a substance we have. The chemistry behind reactions depends on the number of molecules present, not their mass. To put it more simply, it's more important to know how many ingredients you have for making a hamburger than it is to know how much the ingredients weigh. It's more important to have two buns than to just know you have 100 g of buns.
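A minimal sketch of the gram-to-mole conversion in Python; water's molar mass (~18 g/mol) and Avogadro's number are standard values I'm supplying as the example.

AVOGADRO = 6.022e23  # molecules per mole

def grams_to_moles(mass_g, molar_mass_g_per_mol):
    # Convert a weighed mass into moles, i.e. a count of molecules.
    return mass_g / molar_mass_g_per_mol

moles = grams_to_moles(18.0, 18.015)  # 18 g of water
print(moles, moles * AVOGADRO)        # ~1 mol, ~6.02e23 molecules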

Do you think we will ever encounter with extraterrestrial life ?

Yes. Life is inextricably bound to entropy. As soon as we have probes of sufficient resolving power on Mars, Europa, etc., we will find life - or at least the remnants of it. Hyperpiezophiles, cryptoendoliths, hypoliths . . . I think it likely that we will find at least some sort of extraterrestrial extremophile within the next few decades. Now, intelligent life? That's another question entirely.

Do heavily forested regions of the world like the eastern United States experience a noticeable difference in oxygen levels/air quality during the winter months when the trees lose all of their leaves?

Yes. Here is an excellent map showing accurately modeled atmospheric levels of CO2 from satellite and ground measurements taken during a year, for example. You can easily see humans emitting it, and then forested regions sucking it up. Unless it's winter in that hemisphere, in which case it just swirls around until spring. Other gas levels show similar seasonal patterns. https://www.youtube.com/watch?v=x1SgmFa0r04&app=desktop

Carnitine

a nonessential, nonprotein amino acid made in the body from lysine that helps transport fatty acids across the mitochondrial membrane. Carnitine is a quaternary ammonium compound[1] involved in metabolism in most mammals, plants, and some bacteria.[2] Carnitine exists as one of two stereoisomers (the two enantiomers d-carnitine and l-carnitine). Many eukaryotes have the ability to synthesize carnitine, including humans. Humans synthesize carnitine from the substrate TML (6-N-trimethyllysine), which is in turn derived from the methylation of the amino acid lysine. TML is then hydroxylated into hydroxytrimethyllysine (HTML) by trimethyllysine dioxygenase, requiring the presence of ascorbic acid and iron. HTML is then cleaved by HTML aldolase (a pyridoxal phosphate requiring enzyme), yielding 4-trimethylaminobutyraldehyde (TMABA) and glycine. TMABA is then dehydrogenated into gamma-butyrobetaine in an NAD+-dependent reaction, catalyzed by TMABA dehydrogenase. Gamma-butyrobetaine is then hydroxylated by gamma-butyrobetaine hydroxylase (a zinc-binding enzyme[6]) into l-carnitine, requiring iron in the form of Fe2+. Carnitine is involved in transporting fatty acids across the mitochondrial membrane, by forming a long-chain acylcarnitine ester and being transported by carnitine palmitoyltransferase I and carnitine palmitoyltransferase II.[8] Carnitine also plays a role in stabilizing acetyl-CoA and coenzyme A levels through its ability to receive or give an acetyl group.[1]

counterfactual

an educated guess as to what would have happened had a policy or an event not occurred. For example, the statement "If Joseph Swan had not invented the modern incandescent light bulb, then someone else would have invented it anyway" is a counterfactual, because in fact, Joseph Swan invented the modern incandescent light bulb.

Endolith

an organism that lives inside rock. An endolith is an organism that lives inside rock, coral, animal shells, or in the pores between mineral grains of a rock. Many are extremophiles, living in places long imagined inhospitable to life.

beta cells

any of the insulin-producing cells. Beta cells (β cells) are a type of cell found in pancreatic islets that synthesize and secrete insulin and amylin. Beta cells make up 50-70% of the cells in human islets.[1] In patients with type 1 or type 2 diabetes, beta-cell mass and function are diminished, leading to insufficient insulin secretion and hyperglycemia. The primary function of a beta cell is to produce and release insulin and amylin. Both are hormones which reduce blood glucose levels by different mechanisms. Beta cells can respond quickly to spikes in blood glucose concentrations by secreting some of their stored insulin and amylin while simultaneously producing more.[3]

When you turn a flashlight on, it gets lighter. E=mc²

Right - that's mass-energy equivalence, not entropy. The light leaving the flashlight carries energy away, so the flashlight's mass drops by E/c². By the same token, charging a device does increase its mass slightly, but not because electrons are pumped into it (charging doesn't change the net number of electrons); it's because the stored energy itself has mass, Δm = E/c².
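A rough back-of-the-envelope in Python (the 1 W flashlight and the ~40 kJ battery capacity are assumed figures for illustration, not from the source):

    C = 2.998e8  # speed of light, m/s

    def mass_equivalent(energy_joules):
        # Mass corresponding to a given energy: m = E / c^2.
        return energy_joules / C**2

    # A 1 W flashlight left on for an hour radiates 3600 J of light:
    print(mass_equivalent(3600))    # ~4.0e-14 kg lighter

    # Storing ~40 kJ of chemical energy by charging a battery:
    print(mass_equivalent(40_000))  # ~4.5e-13 kg heavier

Both changes are real, but far too small to register on any ordinary scale.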

Intestinal villi

finger-like projections of the small intestine; increase surface area for absorption of nutrients in the small intestine. Intestinal villi (singular: villus) are small, finger-like projections that extend into the lumen of the small intestine. Each villus is approximately 0.5-1.6 mm in length (in humans), and has many microvilli projecting from the enterocytes of its epithelium which collectively form the striated or brush border. Each of these microvilli is much smaller than a single villus, and the intestinal villi are in turn much smaller than any of the circular folds in the intestine. Villi increase the internal surface area of the intestinal walls, making available a greater surface area for absorption. An increased absorptive area is useful because digested nutrients (including monosaccharides and amino acids) pass into the semipermeable villi through diffusion, which is effective only at short distances. In other words, increased surface area (in contact with the fluid in the lumen) decreases the average distance travelled by nutrient molecules, so the effectiveness of diffusion increases. The villi are connected to the blood vessels, so the circulating blood then carries these nutrients away.

Civil liberties

freedoms to think and act without government interference or fear of unfair legal treatment; constitutional freedoms guaranteed to all citizens; individual rights protected by law from unjust governmental or other interference.

Does the size of a creature, or the size of its eye, affect what can be seen by the "naked eye"? for example, can ants see things we consider microscopic? are ants microscopic to elephants?

In terms of optics and photoreceptor density, which are the principal factors in a creature's visual resolution ("details per degree of visual angle"), there probably isn't much real difference between the angular visual resolution of, e.g., a jumping spider and a house cat - both can see, at best, about 10 details per degree. So if you shrank a cat down to spider size (or vice versa), they'd both have similar limits on the smallest things they could see. But since they're different sizes, the sizes of the things they can see also scale: as the eye gets smaller, its minimum focusing distance gets proportionally shorter. So if a jumping spider's eye is a thousand times smaller than a cat's eye, it can potentially resolve details that are far smaller than anything a cat can resolve. A cat can never get optically close enough to a grain of sand to make it a degree wide, so that it could see "ten details" on its surface, while this optical distance is easily available to the spider. The caveat to this general scalability of vision is the "noisiness" of light, i.e. factors like diffraction (a limit on how small a point can be focused) and chromatic aberration (the difference in focal distance for different light wavelengths). For a big eye with a big pupil this noisiness is insignificant, but for a spider's eye it becomes significant, since all of us are looking at more-or-less the same light bandwidth. Jumping spiders, for example, deal with this by having retinas at different focal depths to try to account for chromatic aberration. This stuff gets complicated, especially considering that the range in optical quality and photoreceptor density across species washes out most of these limitations, but I think you could safely suppose that, in the range of terrestrial creature sizes, vision basically scales with size.
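A small Python sketch of that scaling argument (the 10-details-per-degree figure comes from the answer above; the closest focusing distances for the cat and the spider are assumed purely for illustration):

    import math

    DETAILS_PER_DEGREE = 10

    def smallest_detail(viewing_distance_m):
        # Smallest feature resolvable at this distance, given a fixed
        # angular resolution of DETAILS_PER_DEGREE.
        angle_deg = 1 / DETAILS_PER_DEGREE
        return viewing_distance_m * math.tan(math.radians(angle_deg))

    print(smallest_detail(0.25))    # cat at ~25 cm:   ~4.4e-4 m (~0.4 mm)
    print(smallest_detail(0.002))   # spider at ~2 mm: ~3.5e-6 m (~3.5 µm)

Same angular resolution, but the eye that can focus a hundred-odd times closer resolves details a hundred-odd times smaller.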

encephalitis

inflammation of the brain usually caused by a virus

Amylin

slows glucose absorption in the small intestine; suppresses glucagon secretion. Amylin, or islet amyloid polypeptide (IAPP), is a 37-residue peptide hormone.[4] It is cosecreted with insulin from the pancreatic β-cells in a ratio of approximately 100:1 (insulin:amylin). Amylin plays a role in glycemic regulation by slowing gastric emptying and promoting satiety, thereby preventing post-prandial spikes in blood glucose levels. Amylin functions as part of the endocrine pancreas and contributes to glycemic control. The peptide is secreted from the pancreatic islets into the blood circulation and is cleared by peptidases in the kidney. It is not found in the urine. Amylin's metabolic function is well-characterized as an inhibitor of the appearance of nutrients (especially glucose) in the plasma.[13] It thus functions as a synergistic partner to insulin, with which it is cosecreted from pancreatic beta cells in response to meals. The overall effect is to slow the rate of appearance (Ra) of glucose in the blood after eating; this is accomplished via coordinated slowing of gastric emptying, inhibition of digestive secretions (gastric acid, pancreatic enzymes, and bile ejection), and a resulting reduction in food intake. The appearance of new glucose in the blood is reduced by inhibiting secretion of the gluconeogenic hormone glucagon. These actions, which are mostly carried out via a glucose-sensitive part of the brain stem, the area postrema, may be overridden during hypoglycemia. Collectively they reduce the total insulin demand. Amylin also acts in bone metabolism, along with the related peptides calcitonin and calcitonin gene-related peptide.

Chondrite

the first rock-sized bodies that formed in the solar nebula from dust grains. A chondrite is a stony (non-metallic) meteorite that has not been modified by either melting or differentiation of the parent body. (Here "non-metallic" does not imply the total absence of metals.)

Inland taipan

world's most venomous snake. The inland taipan (Oxyuranus microlepidotus) is considered the most venomous snake in the world, with a murine LD50 value of 0.025 mg/kg SC. Ernst and Zug et al. 1996 list a value of 0.01 mg/kg SC, which makes it the most venomous snake in the world in their study too.

