Physics <- StackExchange top 100

1: Gauge pressure vs. absolute pressure? (score 814145 in 2018)

Question

What are the key differences between Gauge pressure and absolute pressure?

Are there any other forms of pressure?

Answer accepted (score 25)

Just check the Wikipedia article. Anyway, here is a one-liner summary from the wiki itself:

  • Absolute pressure is zero-referenced against a perfect vacuum, so it is equal to gauge pressure plus atmospheric pressure.

  • Gauge pressure is zero-referenced against ambient air pressure, so it is equal to absolute pressure minus atmospheric pressure. Negative signs are usually omitted.

  • Differential pressure is the difference in pressure between two points.

Answer 2 (score 4)

It’s just a matter of defining your ‘zero point’.

In a real, actual gauge, pressure is measured relative to the atmospheric pressure. If there was 1 atmosphere of pressure inside a container (so it’s the same pressure inside the container as it is outside it), the gauge will not read 1 atm, but rather 0 atm, as the pressure inside the container would just be the same as the pressure outside. Relative to the outside world there would be no pressure in the container. This is gauge pressure.

Absolute pressure is technically what we think of when we say pressure - the force that the gas is applying per unit area of the container.

If the gas is applying 101,300 Newtons per square meter, then the absolute pressure would be 101.3 kPa. On the other hand, the gauge pressure would be 0 kPa, as 101.3 kPa also happens to be the pressure of the atmosphere outside the container.

Answer 3 (score 3)

Assuming you are in air at sea level and you have an open container, the pressure inside and outside will be the same, so a pressure gauge will read zero. That’s gauge pressure (PSIG).

But there is 1 atmosphere of pressure inside the container - so 1atm of absolute pressure.

Now pump the air out of the container to give a vacuum: you have zero absolute pressure and -1 atmosphere of gauge pressure, although negative gauge pressures are rarely used, as they are confusing.
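The zero-point conversions described in these answers can be sketched in a couple of lines of Python (a minimal illustration; the function names and the use of the standard-atmosphere value are my own, not from the answers):

```python
# Sketch of gauge vs. absolute pressure, zero-referenced as described above.
# Assumes standard atmospheric pressure; names are illustrative.

P_ATM_KPA = 101.325  # standard atmosphere in kPa

def absolute_from_gauge(p_gauge_kpa, p_atm_kpa=P_ATM_KPA):
    """Absolute pressure = gauge pressure + atmospheric pressure."""
    return p_gauge_kpa + p_atm_kpa

def gauge_from_absolute(p_abs_kpa, p_atm_kpa=P_ATM_KPA):
    """Gauge pressure = absolute pressure - atmospheric pressure."""
    return p_abs_kpa - p_atm_kpa

# A container at 1 atm absolute reads zero on a gauge:
print(gauge_from_absolute(101.325))  # 0.0
# A perfect vacuum reads -1 atm on a gauge:
print(gauge_from_absolute(0.0))      # -101.325
```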

2: What is the difference between weight and mass? (score 487876 in 2012)

Question

My science teacher is always saying the words “weight of an object” and “mass of an object,” but then my physics book (that I read on my own) tells me completely different definitions from the way these words are used in my science class… so which is right?

What is the difference between the weight of an object and the mass of an object?

Answer accepted (score 38)

Weight is the force with which gravity pulls on a mass.

Maybe the simplest way to explain the difference is that on the Moon or on Mars, your weight is reduced because gravity is weaker there, but your mass is still the same.

Answer 2 (score 18)

The mass, strictly the inertial mass, relates the acceleration of a body to the applied force via Newton’s law:

\[ F = ma \]

So if you apply a force of 1 Newton to a mass of 1kg it will accelerate at 1m/s\(^2\). This is true whether the object is floating in space or in a gravity field e.g. at the Earth’s surface.

The weight is the force a body exerts when it is in a gravitational field. The weight depends on the gravitational field. For example the weight of a 1kg mass at the Earth’s surface is 9.81 Newtons, while at the surface of Mars it’s about 3.5 Newtons.

This is possibly a bit too much info: if so ignore this last paragraph. Although weight specifically means the force exerted in a gravitational field, Einstein told us that sitting stationary in a gravitational field is equivalent to being accelerated in the absence of gravity. The inertial mass defined using Newton’s laws is the same as the gravitational mass defined by the force a body exerts in a gravitational field. So if you take a 1kg mass at the Earth’s surface, the weight of 9.81 Newtons it exerts is exactly the same as the force you’d need to accelerate the 1kg mass at 9.81m/s\(^2\).

Answer 3 (score 14)

Yes, of course: according to physics, mass and weight are different from each other. Here are their main differences.

Mass:

  • Mass is the amount of matter contained in a body.

  • Mass of the body is the constant quantity and does not change with the change of position or location.

Weight:

  • Weight is the force exerted by a body when it is in a gravitational field; it depends upon the gravitational field.
  • Weight of a body is a variable quantity and changes with position and location, due to the local acceleration of gravity acting on it.

So these are the basic differences between the weight and mass of an object.
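The distinction in these answers can be shown with a few lines of Python (an illustrative sketch; the surface-gravity values are standard approximate figures, not taken from the answers):

```python
# Sketch: the same mass has different weights in different gravitational fields.

G_EARTH = 9.81  # m/s^2, at the Earth's surface
G_MARS = 3.71   # m/s^2, approximate
G_MOON = 1.62   # m/s^2, approximate

def weight_newtons(mass_kg, g):
    """Weight is the gravitational force on a mass: W = m * g."""
    return mass_kg * g

m = 1.0  # kg; the mass itself never changes
print(weight_newtons(m, G_EARTH))  # 9.81 N
print(weight_newtons(m, G_MARS))   # 3.71 N
print(weight_newtons(m, G_MOON))   # 1.62 N
```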

3: Example of situation with conduction, convection, and radiation (score 460299 in 2018)

Question

I am studying heat transfer and have learned there are three kinds of heat transfer: conduction, convection, and radiation. Some examples are:

  • Conduction:

    • Touching a stove and being burned
    • Ice cooling down your hand
    • Boiling water by thrusting a red-hot piece of iron into it
  • Convection:

    • Hot air rising, cooling, and falling (convection currents)
    • An old-fashioned radiator (creates a convection cell in a room by emitting warm air at the top and drawing in cool air at the bottom).
  • Radiation:

    • Heat from the sun warming your face
    • Heat from a lightbulb
    • Heat from a fire
    • Heat from anything else which is warmer than its surroundings.

I have heard that for a vacuum flask all three types are important. Are there other examples where all three are important?

Answer accepted (score 4)

A good example would be heating a tin can of water using a Bunsen burner. Initially the flame produces radiation which heats the tin can. The tin can then transfers heat to the water through conduction. The hot water then rises to the top, in the convection process.

The atmosphere would be another example. The atmosphere is heated by radiation from the Sun, the atmosphere exhibits convection as hot air near the equator rises producing winds, and finally there is conduction between air molecules, and small amounts of air-land conduction.

Answer 2 (score 3)

Actually a good electric oven is a great example of all three:

  • The metal that gets red hot emits light (blackbody radiation);
  • There is the obvious convection of air in the oven, as you mentioned - though this you’ll just feel when opening up the oven for a brief time, as gases are not storing that much energy or transmitting it very well anyway;
  • And there is conduction of heat, as you can feel when you touch any surface inside the oven.

While maybe not the best examples (as some of them involve more complex physics), you can actually come up with many other examples of a more exotic nature: a foundry furnace, lava in contact with sea water, and the Sun. I’ll let you figure out those other ones (but all those examples have quite a bit in common).

4: Relation between pressure, velocity and area (score 455402 in 2014)

Question

In a nozzle, the exit velocity increases as per continuity equation \(Av=const\) as given by Bernoulli equation (incompressible fluid). Pressure is inversely proportional to velocity, so we have lower pressure at the exit of the nozzle. But as per definition of pressure, \(P=F/A\), i.e., pressure is inversely proportional to the area which contradicts the above explanation on basis of continuity and Bernoulli equation.

Which is true? What relation is true for compressible flow?

Answer accepted (score 15)

I disagree with the most voted answer, by CAGT. He says “This area is completely different to the one above”, but this means nothing. The equation \(p = {F \over A}\) mentioned by the author does hold, and there is no contradiction or paradox in it.

In fact, the equation \(p = {F \over A}\) holds not only here but anywhere else in physics. You may write it in any situation, and it will always be true.

Let’s begin with a small correction. Your \(Av = \text{constant}\) equation is not Bernoulli, but mere conservation of mass. Here’s Bernoulli. This is what gives, in your words, “pressure is inversely proportional to velocity.” \[{p \over \rho} + {v^2 \over 2} + gz = \text{constant}\]

So your problem is with \(p = {F \over A}\). Well, there’s no problem with it. What is really wrong with your thinking is that you’re not paying attention to the equation: the force \(F\) changes too.

Let’s recap what happens in your situation:

  1. There’s a change in cross-sectional area: \(A_2 < A_1\)
  2. Thanks to conservation of mass, (1) implies \(v_2 > v_1\)
  3. Thanks to Bernoulli, (2) implies \(p_2 < p_1\)

Ok, now look at this.

The dark blue rectangle on the left is what we call an element. Like the rest of the flow in the bigger section, it flows with velocity \(v_1\). It is delimited left and right by faces with area \(A_1\). Note that, since the liquid left and right of it has pressure \(p_1\), this element is compressed by forces \(F_1 = p_1 A_1\) on each side.

Now to the element on the smaller section, which flows faster. Its cross-sectional area is smaller. The pressure left and right of it is also smaller. As a result, the forces compressing it, \(F_2 = p_2 A_2\), are also smaller.

So, \(p = {F \over A}\) still holds. Yes, when the situation changes, \(A\) is smaller, which by itself would make \(p\) bigger. However, as we saw above, the new \(F\) is smaller than the old one too, which by itself would make \(p\) smaller. The net effect of \(p_2 < p_1\) (which we know beforehand from Bernoulli) means, therefore, simply that \(F\) has diminished more than \(A\) did.
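As a numeric sanity check of those three steps, here is a short Python sketch for a horizontal water pipe (all values are made up for illustration, not taken from the answer):

```python
# Continuity gives v2 from the area change; Bernoulli then gives p2;
# finally the compressive forces F = p * A shrink along with p and A.

RHO = 1000.0  # water density, kg/m^3

A1, v1, p1 = 0.02, 1.0, 200_000.0  # upstream: area m^2, velocity m/s, pressure Pa
A2 = 0.01                          # downstream area is halved

v2 = A1 * v1 / A2                      # continuity: A1*v1 = A2*v2 -> 2.0 m/s
p2 = p1 + 0.5 * RHO * (v1**2 - v2**2)  # Bernoulli, horizontal pipe (z1 = z2)

F1 = p1 * A1  # force compressing the upstream element
F2 = p2 * A2  # force compressing the downstream element

print(v2, p2)  # 2.0 198500.0 -- faster flow, lower pressure
print(F1, F2)  # 4000.0 1985.0 -- F fell more than A did, so p = F/A fell too
```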

Answer 2 (score 5)

The formula

\[Av=const\]

comes from

\[\rho_1A_1V_1=\rho_2A_2V_2 \] (at reasonably low speeds, where fluid density can be assumed constant, \(\rho_1=\rho_2\))

So you end up with:

\[A_1V_1=A_2V_2=\text{constant} \] The units of this are volume flow through area \(A\) per unit time (i.e. \(m^3/s\)); multiplying by the density \(\rho\) gives the mass flow rate in \(kg/s\).

The formula
\[P=\frac{F}{A}\]

gives the pressure of a perpendicular force applied over the area of a surface. This area is completely different from the one above, as there is no mass flowing through it.

Answer 3 (score -2)

\(AV = \text{constant}\) holds only under ideal conditions for continuous flow. Real conditions involve losses due to internal pipe-surface roughness, pipe bends, and sudden enlargements and contractions. A discharge coefficient \(C_d\) with values from 0.96 to 0.98 must be considered. Internal surface friction creates a pressure drop, so \(P = F/A\) is not constant for this reason.
The flow rate \(Q\) can be written as \(Q = C_d A V\). The pipe entrance (developing region), pipe joints, pipe bends and other physical conditions might cause significant losses, which in turn affect continuity in terms of velocity. The flow rate is indeed constant, but the velocity may be reduced. The flow regime can also influence the flow velocity. One can rely on the Moody diagram, where friction is characterized precisely, to look into this.

5: Why do bulbs glow brighter when connected in parallel? (score 292066 in 2018)

Question

Consider a circuit powered by a battery. If light bulbs are attached in parallel, the current will be divided across all of them. But if the light bulbs are connected in series, the current will be the same in all of them. Then it looks like the bulbs should be brighter when connected in series, but actually, they are brighter when connected in parallel. Why is that?

Answer accepted (score 10)

The bulbs will only appear brighter if the available current to the system is not limited. In that case the series bulbs will have a lower voltage across each individual bulb and they will appear dimmer. If the power input to the circuit is constant, then the total wattage output from all bulbs is also constant, and the bulbs will all appear the same (assuming the filaments of the bulbs all have identical resistance).

In a typical simple circuit the power source will be a battery which attempts to hold a constant voltage across the circuit. In this case the voltage across the bulbs in parallel will be equal to the voltage of the battery and the current through the bulb will be defined by \(V = IR\) where \(R\) is the resistance of the filament. This means more current (and thus more power) will be drawn from a battery into the parallel circuit than a series one and the parallel circuit will appear brighter (but will drain your battery faster).

Answer 2 (score 7)

I crafted this answer for this question in the first place but since it got closed, I will post it here to at least contribute.

1) The brightness of a light bulb depends on various parameters, most of them intrinsic properties of the bulb. Essentially, the brightness depends upon the luminous flux of the light source. However, light sources which emit light at different wavelengths but with the same luminous flux can be perceived as having different brightness levels. Therefore, luminous flux is useful if we are comparing the brightness of light sources which emit light at the same wavelength.

For incandescent light bulbs, brightness or luminous flux is directly related to the heat generated by the current flowing in a conductor, since this type of light bulb works by heating the filament until it emits visible light (assuming we have an incandescent light bulb here, because other light sources like LEDs have different properties). What is the term used to specify the heat energy generated by the flowing current per unit time? Power. Therefore, we should increase the power delivered by the source as much as possible to increase the brightness of the light bulb.

To find which parameters we should adjust to increase the power, we can use the Joule-Lenz law, which states that: \[ Q\propto I^2Rt \] Therefore, since power is \(\frac Wt\), we can derive the expression that is proportional to the power: \[ P\propto I^2R \] However, this expression can deceive you into thinking that increasing the resistance of the light bulb increases the brightness. Since raising the resistance also decreases the current passing through the light bulb, and the power depends on the square of the current, we can derive a more reliable formula by using Ohm’s law (\(V=IR\)). Assuming we have an ideal conductor here, one finds that \[ P\propto VI \] Overall, you need to increase the emf of the source to increase the brightness of the light bulb.

2) As all the answerers pointed out, when we wire light bulbs in parallel instead of in series, we decrease the equivalent resistance of the circuit, and therefore increase the current passing through the filaments of the light bulbs. Each light bulb then receives more power (per the Joule-Lenz law), so the bulbs are brighter.

Answer 3 (score 2)

Let’s first look at cases where this is not the case.
When the lamps are connected to a constant current source, current is indeed “divided” over the lamps in parallel. Assuming equal resistance R, the bulbs will both see a current of I/2, and the dissipated power in each is I²R/4 or in total I²R/2. In the case of the bulbs in series, the current I will flow through both bulbs, and the power consumed is I²R for each and 2I²R in total. Contrary to the claim in the question, the bulbs in series will burn brighter.

The same could be true when connecting them to a shunt wound DC generator. The bulbs in parallel have lower total equivalent resistance, pulling more current and lowering terminal voltage. Depending on the power rating of the generator and the bulbs, it could be that the bulbs in parallel represent a load too large for the generator, lowering the voltage over the shunt too far, which reduces the magnetic field, which causes the induced voltage to go down, lowering the magnetic field further etc.. The bulbs in series have higher total resistance and won’t pull the voltage down so much. So in this case, they could again be the ones burning brightest.

When the bulbs are connected to a voltage source, the voltage is divided over the two bulbs in series, and the power consumed by each will be U²/4R. The bulbs in parallel each have the full voltage U over them, and therefore each consumes U²/R of power. In this case the claim in the question is correct: the bulbs in parallel are brightest. This is the usual situation; voltage sources are much more common than current sources.

The first two examples assume that the bulbs in parallel and the bulbs in series weren’t all (four) connected at the same time. If that was the case, those in parallel would always burn brighter.

Note: the assumption that incandescent light bulbs have constant resistance is completely wrong! Resistance changes with temperature:
For some metals, a linear function fits best: \(R=R_0[1+\alpha(T-T_0)]\)
For others, like tungsten, a power function fits better:

\(\rho=0.06052*T^{1.203}\)
with \(\rho\) in \(\mathrm{n\Omega\cdot m}\) and \(T\) in kelvin

A table of measured values can be found here. At 2400 K, the resistance will be 14 times higher than at 273 K.

All explanations given above are therefore only qualitatively correct (the result won’t change; the same bulbs will be the brightest). An expression for the dependence of R on U or I can, if needed, be derived from the Stefan–Boltzmann law.
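To put numbers on the constant-voltage case (the usual one, as the answers note), here is a small Python sketch; it keeps the constant-resistance approximation that the note above warns is only qualitative, and the component values are illustrative:

```python
# Two identical bulbs of resistance R across a constant-voltage source U.

U = 12.0  # source voltage in volts (illustrative)
R = 6.0   # bulb resistance in ohms (illustrative, assumed constant)

# Series: the voltage divides, so each bulb dissipates (U/2)^2 / R = U^2 / 4R.
p_series_each = (U / 2) ** 2 / R

# Parallel: each bulb sees the full U and dissipates U^2 / R.
p_parallel_each = U ** 2 / R

print(p_series_each)    # 6.0 W per bulb
print(p_parallel_each)  # 24.0 W per bulb -- four times the power, hence brighter
```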

6: How does water evaporate if it doesn’t boil? (score 286183 in 2017)

Question

When the sun is out after a rain, I can see what appears to be steam rising off a wooden bridge nearby. I’m pretty sure this is water turning into a gas.

However, I thought water had to reach 100 degrees C to be able to turn into a gas.

Is there an edge case, for small amounts of water perhaps, that allows it to evaporate?

Answer accepted (score 53)

Evaporation is a different process to boiling. The first is a surface effect that can happen at any temperature, while the latter is a bulk transformation that only happens when the conditions are correct.

Technically the water is not turning into a gas, but random movement of the surface molecules allows some of them enough energy to escape from the surface into the air. The rate at which they leave the surface depends on a number of factors - for instance the temperature of both air and water, the humidity of the air, and the size of the surface exposed. When the bridge is ‘steaming’: the wood is marginally warmer than the air (due to the sunshine), the air is very humid (it has just been raining) and the water is spread out to expose a very large surface area. In fact, since the air is cooler and almost saturated with water, the molecules of water are almost immediately condensing into micro-droplets in the air - which is why you can see them.

BTW - As water vapour is a gas, it is completely transparent. If you can see it then it is steam, which consists of tiny water droplets (basically water vapour that has condensed). Consider a kettle boiling - the white plume only occurs a short distance above the spout. Below that it is water vapour, above it has cooled into steam. Steam disappears after a while, as it has evaporated once again.

Answer 2 (score 21)

For every temperature, there is some amount of water vapor that can exist as gas mixed in with the air. This is called the saturation pressure of water at that temperature. The relative humidity is the amount of water vapor pressure, expressed as a percentage of the saturation pressure. As you increase the temperature, the saturation pressure increases.

Steam is water in its gaseous phase.

You can’t see water vapor, you can’t see steam, but you can see mist, which is liquid water droplets suspended in the air.

When you boil water on the stove, you get steam. This then cools when it comes into contact with the air, increasing the relative humidity above 100%, so the water vapor condenses into mist.

If the relative humidity is bigger than 100%, water vapor will condense from the air, becoming dew and/or mist. If the relative humidity is less than 100%, water will evaporate into the air, becoming water vapor.

If the wooden bridge is warmer than the surrounding air, and the relative humidity is around 100%, then water will evaporate off of the wooden bridge, turning into water vapor (the relative humidity is lower right next to the bridge, because the bridge is warmer). When the air containing this water vapor rises and cools, water condenses out of it, turning into the mist that you see.

Here is a graph of the saturation pressure (from this website). Note that at 100°C, the pressure is \(\approx10^5\) Pa \(=1000\,\)hPa, which is roughly atmospheric pressure. This means that at 100°C, you can have pure water vapor at atmospheric pressure. This is why water boils at 100°C at sea level—a bubble of steam can form below the surface of the water. At higher altitudes, the boiling point can be substantially lower.

Answer 3 (score 3)

Below “boiling point” (not always 100C), water can exist in both gas and liquid phase, and has a temperature-dependent vapour pressure, which represents a point of equilibrium between liquid water wanting to evaporate and water vapour wanting to condense. When liquid water meets dry air, it is not in equilibrium; water molecules evaporate off the surface until the amount of water in the air creates enough vapour pressure to achieve equilibrium.

When water is heated to a temperature of 100C, the vapour pressure equals that of sea-level air pressure. Since the air pressure can no longer overcome the vapour pressure of the water, the water boils.

At higher elevations, air pressure is lower; as water is heated, its vapour pressure overcomes ambient air pressure at a lower temperature i.e. the boiling point is lower.

Vice-versa for higher pressures.

As for the steam rising off the bridge, that is actually water vapor condensing. Very close to the wet surfaces, the air is saturated with water vapor, which is transparent. It is also less dense than dry air, so it rises. As it rises away from what is likely a warm surface, it cools. As it cools, it condenses, but it is also mixing with drier air, so it evaporates again and disappears.
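The vapour-pressure picture in these answers can be sketched numerically with the Antoine equation for water, a standard empirical fit (the coefficients below are textbook values valid roughly from 1 °C to 100 °C; they are not taken from the answers):

```python
# Saturation (vapour) pressure of water via the Antoine equation:
# log10(P[mmHg]) = A - B / (C + T[degC]), with textbook coefficients for water.

A, B, C = 8.07131, 1730.63, 233.426  # valid for roughly 1-100 degC

def saturation_pressure_mmhg(t_celsius):
    """Vapour pressure of water in mmHg at the given temperature."""
    return 10 ** (A - B / (C + t_celsius))

# At 100 degC the vapour pressure reaches ~760 mmHg = 1 atm, so water boils:
print(saturation_pressure_mmhg(100.0))  # ~760 mmHg

# At 70 degC it is far below 1 atm -- no boiling at sea level, but water still
# evaporates, and it would boil wherever ambient pressure drops to this value:
print(saturation_pressure_mmhg(70.0))   # ~233 mmHg
```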

7: Jumping into water (score 256992 in 2018)

Question

Two questions:

  1. Assuming you dive head first or fall straight with your legs first, what is the maximal height you can jump into water from and not get hurt?
    In other words, an H meter fall into water is equivalent to how many meters concrete-pavement fall, force wise? (I’m assuming the damage caused will be mainly due to amount of force and not the duration)

  2. Assume you jump head first and hold a sharp, strong, long object that cuts the water before you arrive. Will that make the entry into the water smoother and protect you?

Answer accepted (score 9)

Answering your questions in reverse order:

Yes, a long pointy object (like your arms over your head, in a dive, or your pointed toes in a feet-first entry) will make a big difference. Remember the tongue-in-cheek adage, “it’s not the fall that kills you; it’s the sudden stop?” That is exactly what differentiates a fall onto concrete from a fall into water: how sudden is the stop. And making that stop LESS sudden (decreasing the magnitude of deceleration during the stop) is exactly how airbags save your life in a car crash. One can decrease the magnitude of deceleration by reducing the ratio \((\Delta V / \Delta t)\). Since there is roughly a linear relationship between time and distance traveled during the instant of impact, you can achieve the same effect by reducing the ratio \((\Delta V / \Delta s)\) where \(s\) = distance traveled during the deceleration event. The easiest way to do this is to lengthen \(s\).

One thing to remember about the water fall statistics is that a large number of them are likely “unpracticed”. These are not olympic divers working up to 250 feet. A large proportion of them are unconditioned people forced into a water “escape”; or, worse, are people TRYING to die.

Assuming you are doing the right thing, and optimizing your form for water entry, you will simultaneously be minimizing your wind resistance during the fall:

1.) A fall from 30 feet will result in a velocity of roughly 44 ft/s = 30 mph.

2.) A fall from 100 feet will result in a velocity of roughly 80 ft/s = 54 mph.

3.) A fall from 150 feet will result in a velocity of roughly 97 ft/s = 66 mph.

4.) A fall from 250 feet will result in a velocity of roughly 125 ft/s = 85 mph.

The first case is a tower jump I did for the Navy, and is trivial for anyone who is HWP and doesn’t belly flop. The second is an approximation of a leap from a carrier deck, which the tower jump was supposed to teach you how to survive (be able to swim after the fall). The third is only 20% faster entry speed (and force) and should be survivable by anyone in good shape and able to execute good form (pointed toe entry, knees locked, head up, arms straight up). The La Quebrada cliff divers routinely dive from 125 feet as a tourist attraction. If forced to choose, I’d pick a feet-first entry at 150 feet over a dive at 125.

So the interesting part is the stretch from 150 to 250 feet. My guess is that the limit for someone voluntarily performing repeated water dives/jumps from a height of \(x\) will show \(x\) to be somewhere around \(225 \text{ feet} \pm 25 \text{ feet}\).

EDIT: There are documented cases of people surviving falls from thousands of feet (failed parachute) onto LAND. These freaky cases of surviving terminal velocity falls do not answer the question practically; but they are there. For example, Vesna Vulović is the world record holder for the biggest surviving fall without a parachute.
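The entry speeds quoted in this answer follow from free fall with air resistance neglected, \(v = \sqrt{2gh}\); a quick Python sketch (my own, using g in ft/s²) roughly reproduces the table:

```python
import math

G_FT = 32.174  # standard gravity in ft/s^2

def impact_speed_fps(height_ft):
    """Speed after free fall from height_ft, neglecting drag: v = sqrt(2*g*h)."""
    return math.sqrt(2 * G_FT * height_ft)

FPS_TO_MPH = 3600 / 5280  # ft/s -> mph

for h in (30, 100, 150, 250):
    v = impact_speed_fps(h)
    print(f"{h} ft: {v:.0f} ft/s = {v * FPS_TO_MPH:.0f} mph")
```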

Answer 2 (score 5)

From searching I find that the survival rate from a fall depends on the deceleration rate that the body undergoes on impact, and that depends on the surface of the fall. Water is less punishing than land.

Here is a quote, which I will not source, but can be found on the net.

Stone states that jumping from 150 feet (46 metres) or higher on land, and 250 feet (76 metres) or more on water, is 95% to 98% fatal. 150 feet/46 metres, equates to roughly 10 to 15 stories in a building, depending on the height of one story. 250 feet is the height of the Golden Gate Bridge in San Francisco.

Most publications are behind pay walls and have to do with statistics of suicide survival, not a pleasant topic.

Answer 3 (score 2)

It obviously depends quite strongly on how hydrodynamic your entry is. Try doing a belly flop from 2 meters: ouch! Whereas competition divers routinely dive from 10 m. Having grown up in a pool, jumping and diving off the ten-foot board, I am amazed at current law, which makes it illegal to have a 3-foot diving board without the water being something like 17 feet deep. Ours was a mere 9 feet, and thousands jumped and dived in over a period of decades with no one getting smashed on the bottom. So I suspect that if you get the hydrodynamics just right, you can end up going pretty deep and fast. That’s the position you need to hit in, but it’s probably well out on the tail of the distribution. I believe I heard about a case of someone falling into the ocean from 20,000 feet and surviving (it must have been a parachute failure or some such). It may have been only an urban legend, but I have no reason to doubt that a lucky landing at terminal velocity is survivable.

8: Virtual vs Real image (score 246533 in 2016)

Question

I’m doing magnification and lens in class currently, and I really don’t get why virtual and real images are called what they are.

A virtual image occurs when the object is closer to the lens than its focal length, and a real image occurs when the object is farther from the lens than the focal length.

But why virtual and real? What’s the difference? You can’t touch an image no matter what it’s called, because it’s just light.

Answer accepted (score 39)

You can project a real image onto a screen or wall, and everybody in the room can look at it. A virtual image can only be seen by looking into the optics and can not be projected.

As a concrete example, you can project a view of the other side of the room using a convex lens, and can not do so with a concave lens.


I’ll steal some images from Wikipedia to help here:

First consider the line optics of real images (from http://en.wikipedia.org/wiki/Real_image):

real images formed by a single convex lens or concave mirror

Notice that the lines that converge to form the image point are all drawn solid. This means that there are actual rays, composed of photons originating at the source object. If you put a screen in the focal plane, light reflected from the object will converge on the screen and you’ll get a luminous image (as in a cinema or an overhead projector).

Next examine the situation for virtual images (from http://en.wikipedia.org/wiki/Virtual_image):

virtual images formed by a single concave lens or convex mirror

Notice here that the image is formed by one or more dashed lines (possibly with some solid lines). The dashed lines are drawn off the back of the solid lines and represent the apparent path of light rays from the image to the optical surface, but no light from the object ever moves along those paths. The light energy from the object is dispersed, not collected, and cannot be projected onto a screen. There is still an “image” there, because those dispersed rays all appear to be coming from the image. Thus, a suitable detector (like your eye) can “see” the image, but it cannot be projected onto a screen.

Answer 2 (score 12)

For a real image, rays from a single source point converge to a single point on the other side of the lens. This means that a point on the image remains well-defined after the optical transformation (refraction by lens).

For a virtual image, rays from a single source point diverge after they pass through the lens. This means that a point on the image is no longer well-defined after the optical transformation. Realistically, what you’ll see on a screen placed at the other end of a diverging (concave) lens is a large haze of weak light (since the rays have diverged so greatly).

Note that convex (converging) single lenses produce real images (when the object lies outside the focal length), while concave (diverging) single lenses always produce virtual images. For mirrors it’s generally the other way round (there are exceptions though). Similarly, for double lenses things are reversed (compared to single lenses).

Answer 3 (score 11)

Operationally, one can distinguish real and virtual images by attaching a small but very powerful light bulb to the source.

If you insert your hand near the real image, the hand will get burned because the light rays from the source actually converge at the point of the real image. If you manage to insert your hand to the point of the virtual image, your hand won’t burn because there are no light rays over there (and sometimes, there is a wall).

An ordinary flat mirror creates a virtual image because there are no light rays on the internal side of the mirror.

The optics in the human eye creates a real image where the retina is located and sensitive cells actually read the information about the original source. The same mechanism works in digital cameras.

Real images may be produced by converging lenses (eye, digital camera) and concave mirrors.

Virtual images may be produced by diverging lenses and convex mirrors, but also by concave mirrors and converging lenses if you place the source within the focal length.
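The distinction can also be seen from the thin-lens equation \(\frac1f = \frac1{d_o} + \frac1{d_i}\): with the usual sign convention, a positive image distance means a real image and a negative one a virtual image. A short Python sketch (standard physics, but the numbers and function name are my own illustration):

```python
# Thin-lens equation with the common convention: f > 0 for a converging lens,
# d_i > 0 means a real image on the far side, d_i < 0 means a virtual image.

def image_distance(f, d_object):
    """Solve 1/f = 1/d_o + 1/d_i for the image distance d_i."""
    return 1.0 / (1.0 / f - 1.0 / d_object)

f = 10.0  # cm, converging lens (illustrative)

# Object outside the focal length -> real image (positive d_i):
print(image_distance(f, 30.0))  # about 15 cm, real

# Object inside the focal length -> virtual image (negative d_i):
print(image_distance(f, 5.0))   # about -10 cm, virtual
```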

9: Why is this calculation of average speed wrong? (score 244119 in 2018)

Question

A person drove 120 miles at 40 mph, then drove back the same 120 miles at 60 mph. What was their average speed?

The average of the speeds is \[\frac{40\ \text{mph} +60\ \text{mph}}{2} = 50\ \text{mph}\] so the total trip time should be, by the definition of average speed, \[\frac{240\ \text{mi}}{50\ \text{mph}} = 4.8 \ \text{hours}.\]
However, this is wrong, because the trip actually took \(3 + 2 = 5\) hours.

What did I do wrong, and what is the correct way to calculate the average speed?

Answer accepted (score 25)

The reason is that the times taken for the two trips are different, so the average speed is not simply \(\frac{v_1 + v_2}{2}\).

We should go back to the definition. The average speed is always (total length) ÷ (total time). In your case, the total time can be calculated as

\[\begin{align} \text{time}_1 &= \frac{120\ \text{miles}}{40\ \text{mph}} \\\\ \text{time}_2 &= \frac{120\ \text{miles}}{60\ \text{mph}} \end{align}\]

so the total time is \(120\ \text{miles} \times \left(\frac{1}{40\ \text{mph}} + \frac{1}{60\ \text{mph}}\right)\). The average speed is therefore:

\[\begin{align} \text{average speed} &= \frac{2 \times 120\ \text{miles}}{120\ \text{miles} \times \left(\frac{1}{40\ \text{mph}} + \frac{1}{60\ \text{mph}}\right)} \\\\ &= \frac{2}{\frac{1}{40\ \text{mph}} + \frac{1}{60\ \text{mph}}} \\\\ &= 48\ \text{mph} \end{align}\]

In general, when the lengths of the trips are the same, the average speed will be the harmonic mean of the respective speeds.

\[ \text{average speed} = \frac2{\frac1{v_1} + \frac1{v_2}} \]
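The harmonic-mean formula can be checked against the trip in the question; a minimal sketch, not part of the original answer:

```python
# Average speed over equal-distance legs is total distance / total time,
# which works out to the harmonic mean of the leg speeds.
def average_speed(leg_distance, speeds):
    """Each leg covers leg_distance miles at the corresponding speed."""
    total_distance = leg_distance * len(speeds)
    total_time = sum(leg_distance / v for v in speeds)
    return total_distance / total_time

def harmonic_mean(speeds):
    return len(speeds) / sum(1.0 / v for v in speeds)

print(average_speed(120, [40, 60]))   # 240 miles / 5 hours = 48.0
print(harmonic_mean([40, 60]))        # ~48, matching the formula above
```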

Answer 2 (score 7)

\[\mathrm{Average\ Speed = \frac{Total\ Distance}{Total\ time}}\]

So basically,

\(t_1 = 120/40 = 3\ hrs\)

\(t_2 = 120/60 = 2\ hrs\)

Total time \(= 5\ hrs\)

Total distance = \(240\) miles

Average speed \(= 240/5 = 48\ \text{mph}\)

Answer 3 (score 6)

The difficulty is that since the trip at 40 mph takes longer, you spend more time going 40 mph than you do going 60 mph, so the average speed is weighted more heavily towards 40 mph.

When calculating average speeds for fixed distances, it is better to think of everything in minutes per mile rather than miles per hour.

60 miles per hour is 1 minute per mile, while 40 miles per hour is 1.5 minutes per mile. Since we travel the same number of miles at each speed, we can now take the mean of these two figures. That’s 1.25 minutes per mile on average. For 240 miles total, 240 miles × 1.25 minutes/mile = 300 minutes = 5 hours.

This method is called finding the “harmonic mean” of the speeds.

10: Cooling a cup of coffee with help of a spoon (score 236115 in 2019)

Question

During the breakfast with my colleagues, a question popped into my head:

What is the fastest method to cool a cup of coffee, if your only available instrument is a spoon?

A qualitative answer would be nice, but if we could find a mathematical model or even better make the experiment (we don’t have the means here :-s) for this it would be great! :-D

So far, the options that we have considered are (any other creative methods are also welcome):

Stir the coffee with the spoon:

Pros:

  • The whirlpool has greater surface than the flat coffee, so it is better for heat exchange with the air.
  • Due to the difference of speed between the liquid and the surrounding air, the Bernoulli effect should lower the pressure and that would cool it too to keep the atmospheric pressure constant.

Cons:

  • Joule effect should heat the coffee.
Leave the spoon inside the cup:

As the metal is a good heat conductor (and we are not talking about a wooden spoon!), and part of it is inside the liquid while the rest is outside, it should help with the heat transfer, right?

A side question about this is what is better, to put it like normal or reversed, with the handle inside the cup? (I think it is better reversed, as there is more surface in contact with the air, as in the CPU heat sinks).

Insert and remove the spoon repeatedly:

The reasoning about this is that the spoon cools off faster when it’s outside.

(I personally think it doesn’t pay off compared to keeping the spoon always inside: as the spoon gets cooler, the temperature gradient shrinks and the heat transfer worsens.)

Answer accepted (score 735)

We did the experiment. (Early results indicate that dipping may win, though the final conclusion is uncertain.)

  • \(H_2O\) ice bath
  • canning jar
  • thermometer
  • pot of boiling water
  • stop watch

There were four trials, each lasting 10 minutes. Boiling water was poured into the canning jar, and the spoon was taken from the ice bath and placed into the jar. A temperature reading was taken once a minute. After each trial the water was poured back into the boiling pot and the spoon was placed back into the ice bath.

equipment


 Method:                  Final Temp.
 1. No Spoon              151 F 
 2. Spoon in, no motion   149 F
 3. Spoon stirring        147 F
 4. Spoon dipping         143 F

Temperature readings have an error of \(\pm 1\,^\circ\mathrm{F}\).

chart

 Red   line:  no Spoon
 Green line:  Spoon in, no motion
 Aqua  line:  Stirring
 Blue  line:  Dipping

\[\begin{array}{|c|cl|cl|cl|cl|} \hline \text{Min} & \text{No Spoon} & & \text{Spoon} & & \text{Stirring} & & \text{Dipping} \\ \hline & \text{°F} & \text{°C} & \text{°F} & \text{°C} & \text{°F} & \text{°C} & \text{°F} & \text{°C} \\ \hline 1' & 180 & 82.22 & 175 & 79.44 & 175 & 79.44 & 177 & 80.56 \\ 2' & 174 & 78.89 & 172 & 77.78 & 171 & 77.22 & 173 & 78.33 \\ 3' & 171 & 77.22 & 168 & 75.56 & 167 & 75 & 168 & 75.56 \\ 4' & 168 & 75.56 & 165 & 73.89 & 164 & 73.33 & 164 & 73.33 \\ 5' & 164 & 73.33 & 162 & 72.22 & 161 & 71.67 & 160 & 71.11 \\ 6' & 161 & 71.67 & 160 & 71.11 & 158 & 70 & 156 & 68.89 \\ 7' & 158 & 70 & 156 & 68.89 & 155 & 68.33 & 152 & 66.67 \\ 8' & 155 & 68.33 & 153 & 67.22 & 152 & 66.67 & 149 & 65 \\ 9' & 153 & 67.22 & 151 & 66.11 & 150 & 65.56 & 146 & 63.33 \\ 10' & 151 & 66.11 & 149 & 65 & 147 & 63.89 & 143 & 61.67 \\ \hline \end{array}\]
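The cooling curves above roughly follow Newton’s law of cooling, \(T(t) = T_{\text{amb}} + (T_0 - T_{\text{amb}})e^{-kt}\). A minimal sketch in Python, assuming an ambient temperature of 72 °F (an assumption; room temperature was not reported in the experiment), fitting \(k\) from the first and last “no spoon” readings:

```python
import math

# Newton's law of cooling: T(t) = T_amb + (T0 - T_amb) * exp(-k*t).
# T_AMB = 72 F is an ASSUMED room temperature, not a measured value.
T_AMB = 72.0

def cooling_rate(t0, T0, t1, T1):
    """Fit the rate constant k from two (minute, temperature) readings."""
    return math.log((T0 - T_AMB) / (T1 - T_AMB)) / (t1 - t0)

def temperature(T0, k, dt):
    """Temperature dt minutes after a reading of T0."""
    return T_AMB + (T0 - T_AMB) * math.exp(-k * dt)

# "No spoon" run: 180 F at minute 1, 151 F at minute 10.
k = cooling_rate(1, 180.0, 10, 151.0)
# Predict the minute-5 reading (measured value: 164 F).
print(round(temperature(180.0, k, 5 - 1), 1))
```

With these assumptions the fitted exponential predicts about 166 °F at minute 5, within a couple of degrees of the measured 164 °F, so a single-exponential model is a reasonable first approximation for the no-spoon run.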

Answer 2 (score 209)

Stirring will win, hands down, every time.

This is why physicists need to talk to chemists once in a while.

As Georg correctly remarks, the latent heat of vaporization of water is enormous - but he’s wrong about waving the spoon; stirring is the champion here.

Why? Temperature is really the average kinetic energy of the molecules in the bulk substance, which actually have a variety of individual kinetic energies. Stirring is the fastest way to bring high-kinetic-energy outlier water molecules to the surface, where they will overcome the electrostatic bonding mechanisms that keep them in the liquid phase, and jump into the air (vapor phase). This rapid decrease in the high-energy outliers is the quickest way to cool a hot aqueous solution.

It’s similar to stirring iced tea. If you just plop ice cubes into a glass of warm tea, it will take quite a while for the warmer tea to cool; if you stir it vigorously, it will reach a cold equilibrium within seconds; the latent heat of fusion absorbed by the ice melting is similarly enormous.

This kind of thing has a lot of applications to laboratory and industrial chemical processes, surface catalysis, petroleum cracking, yadda yadda. You learn a lot about it in third-year university physical chemistry, and really must master it before or during graduate work as a chemist.

If you want an even faster way to cool a cup of coffee, here’s a tip from my Granddad Parker: forget the spoon and saucer your coffee. In other words, pour the top part of it from the cup into a saucer, and then back again a few times. The large and constantly changing surface area during this process will cause extremely rapid evaporation of those high-energy outliers, much faster than stirring. Saucering was very common up through the Great Depression, which is one of the reasons older coffee sets always included saucers. You also get deep-ish saucers at many restaurants as a holdover from this practice, although I doubt many people do it any more.
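The claim that evaporation dominates can be made plausible with a back-of-the-envelope estimate: the latent heat of vaporization of water (about 2260 J/g) dwarfs its specific heat (about 4.18 J/(g·K)). A minimal sketch with assumed, illustrative numbers (a 250 g cup, cooled by 10 K):

```python
# Rough estimate: how much water must evaporate to cool a cup of coffee?
# The 250 g cup size and 10 K temperature drop are illustrative assumptions.
SPECIFIC_HEAT = 4.18   # J/(g*K), water
LATENT_HEAT = 2260.0   # J/g, latent heat of vaporization of water

def grams_evaporated(mass_g, delta_T):
    """Water mass whose evaporation removes enough heat to cool mass_g by delta_T."""
    return mass_g * SPECIFIC_HEAT * delta_T / LATENT_HEAT

# Cooling 250 g of coffee by 10 K requires evaporating only a few grams:
print(round(grams_evaporated(250.0, 10.0), 1))   # 4.6
```

Less than 2% of the coffee needs to evaporate to drop its temperature by 10 K, which is why anything that speeds up evaporation (stirring, saucering) cools the cup so effectively.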

Answer 3 (score 64)

Well, if you are only allowed to use a spoon, the fastest way to cool the coffee for drinking is to take a spoonful, blow on it, drink it from the spoon, and take the next spoonful. Convection does wonders.

If you are allowed a saucer instead of a spoon, pour a bit of coffee in the saucer, blow on it and drink it.

11: How exactly does time slow down near a black hole? (score 234895 in 2014)

Question

How exactly does time slow down near a black hole? I have heard this as a possible way of time traveling, and I do understand that it is due in some way to the massive gravity around a black hole, but how exactly does that massive gravity slow down time?

Answer accepted (score 31)

This web page provides a good explanation.

To oversimplify the explanation, you have to understand the curvature of spacetime around a black hole. The basic principle is that, because of the curvature of spacetime around a black hole, the amount of “distance” a beam of light has to cover is greater near the black hole. However, since light must always appear to travel at 300,000 km/sec to an observer in that gravitational field, time has to slow down for that individual compared to someone outside the field, as related by the speed = distance/time relationship.

Or as the web page says:

If acceleration is equivalent to gravitation, it follows that the predictions of Special Relativity must also be valid for very strong gravitational fields. The curvature of spacetime by matter therefore not only stretches or shrinks distances, depending on their direction with respect to the gravitational field, but also appears to slow down the flow of time. This effect is called gravitational time dilation. In most circumstances, such gravitational time dilation is minuscule and hardly observable, but it can become very significant when spacetime is curved by a massive object, such as a black hole.

A black hole is the most compact matter imaginable. It is an extremely massive and dense object in space that is thought to be formed by a star collapsing under its own gravity. Black holes are black, because nothing, not even light, can escape from its extreme gravity. The existence of black holes is not yet firmly established. Major advances in computation are only now enabling scientists to simulate how black holes form, evolve, and interact. They are betting on powerful instruments now under construction to confirm that these exotic objects actually exist.

This web page provides a large series of links for further research into the subject: http://casa.colorado.edu/~ajsh/relativity.html

Answer 2 (score 13)

A good analogy for the strangeness of space and time around a black hole is traveling from the US to Canada. You feel about the same and the surroundings look the same (like nothing special happening to you when you cross the black hole’s event horizon), and the prices in the stores look about the same, but if you try to use the money you brought with you, you suddenly have to make these non-local corrections. Likewise, your own personal time always “feels” the same when you explore a black hole, but your clock runs slower than someone else’s clock that is farther from the black hole.

And in fact, crossing the black hole’s event horizon is the equivalent of changing your money over to (worthless!!!!) Zimbabwean dollars- your clock seems to stop entirely, from the point of view of someone far from the black hole, even though things seem just fine from your own point of view.

PS- A black hole can be used only for time travel into the future! Just hang out close to the event horizon for a while and then return. Much more time may have passed for everyone else because your clock seemed to run so slowly.

Answer 3 (score 11)

Time slows down near any massive body; black holes are merely the most extreme example. GPS satellites orbiting the Earth have to correct for the fact that time passes very slightly more slowly on the Earth’s surface than it does at the satellites’ orbital altitude: by about one second per every 60 years.

In a sense, gravity and time dilation are the same thing: they are both consequences of the curvature of spacetime near a massive body. You can’t have one without the other.
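The “clock runs slower” statements above can be made quantitative with the standard Schwarzschild time-dilation factor \(\sqrt{1 - r_s/r}\), where \(r_s = 2GM/c^2\) is the Schwarzschild radius. A minimal sketch; the 10-solar-mass black hole is an illustrative assumption, not a figure from the answers:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def dilation_factor(mass_kg, r_m):
    """Rate of a clock hovering at radius r, relative to a far-away clock."""
    r_s = 2 * G * mass_kg / C**2   # Schwarzschild radius
    return math.sqrt(1 - r_s / r_m)

# Illustrative case: a 10-solar-mass black hole.
mass = 10 * M_SUN
r_s = 2 * G * mass / C**2
print(round(r_s / 1000, 1))                         # ~29.5 km
# Hovering at 1.1 r_s, your clock runs at ~30% of the far-away rate.
print(round(dilation_factor(mass, 1.1 * r_s), 3))   # 0.302
```

The factor goes to zero as \(r \to r_s\), which is the far-away observer’s view that clocks “seem to stop entirely” at the event horizon.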

12: Why is AC more “dangerous” than DC? (score 226560 in 2013)

Question

After going through several forums, I became more confused whether it is DC or AC that is more dangerous. In my text book, it is written that the peak value of AC is greater than that of DC, which is why it tends to be dangerous. Some people in other forums were saying that DC will hold you, since it doesn’t have zero crossing like that of AC. Many others also say that our heart tries to beat with the frequency of ac which the heart cannot support leading to people’s death. What is the actual thing that matters most?

After all, which is more dangerous? AC or DC?

Answer accepted (score 32)

The RMS (root-mean-square) value of an AC voltage, which is what is represented as “110 V”, “120 V”, or “240 V”, is lower than the electricity’s peak voltage. Alternating current has a sinusoidal voltage; that’s how it alternates. So yes, it’s more than it appears, but not by a terrific amount: 120 V RMS turns out to be about 170 V at the peak.

I remember hearing once that it is current, not voltage, that is dangerous to the human body. This page describes it well. According to them, if more than 100 mA makes it through your body, AC or DC, you’re probably dead.

One of the reasons that AC might be considered more dangerous is that it arguably has more ways of getting into your body. Since the voltage alternates, it can cause current to enter and exit your body even without a closed loop, since your body (and what ground it’s attached to) has capacitance. DC cannot do that. Also, AC is quite easily stepped up to higher voltages using transformers, while with DC that requires some relatively elaborate electronics. Finally, while your skin has a fairly high resistance to protect you, and the air is also a terrific insulator as long as you’re not touching any wires, sometimes the inductance of AC transformers can cause high-voltage sparks that break down the air and I imagine can get through your skin a bit as well.

Also, like you mentioned, the heart is controlled by electric pulses and repeated pulses of electricity can throw this off quite a bit and cause a heart attack. However, I don’t think that this is unique to alternating current. I read once about an unfortunate young man that was learning about electricity and wanted to measure the resistance of his own body. He took a multimeter and set a lead to each thumb. By accident or by stupidity, he punctured both thumbs with the leads, and the small (I imagine it to be 9 V) battery in the multimeter caused a current in his bloodstream, and he died on the spot. So maybe ignorance is more dangerous than either AC or DC.

Answer 2 (score 24)

I think that this page explains it very well: http://www.allaboutcircuits.com/vol_1/chpt_3/3.html

Direct current (DC), because it moves with continuous motion through a conductor, has the tendency to induce muscular tetanus quite readily. Alternating current (AC), because it alternately reverses direction of motion, provides brief moments of opportunity for an afflicted muscle to relax between alternations. Thus, from the concern of becoming “froze on the circuit,” DC is more dangerous than AC.

However, AC’s alternating nature has a greater tendency to throw the heart’s pacemaker neurons into a condition of fibrillation, whereas DC tends to just make the heart stand still. Once the shock current is halted, a “frozen” heart has a better chance of regaining a normal beat pattern than a fibrillating heart. This is why “defibrillating” equipment used by emergency medics works: the jolt of current supplied by the defibrillator unit is DC, which halts fibrillation and gives the heart a chance to recover.

There is a table with bodily effects at http://www.allaboutcircuits.com/vol_1/chpt_3/4.html

Answer 3 (score 4)

If you have a dc voltage of \(x\) volts, this is the maximum voltage you can get from it.

If you have an ac voltage of \(x\) volts, the maximum voltage is more than \(x\): it is \(x\sqrt2\). This is because the voltage rating is the RMS (root-mean-square) value of the oscillating voltage, which for a sine wave is the peak divided by \(\sqrt2\).

And, a higher voltage means more dangerous, right?

EDIT: Check this wikipedia example out: http://en.wikipedia.org/wiki/Alternating_current#Example
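The peak-versus-RMS relationship can be checked numerically; a minimal sketch (the 120 V figure is used purely as an illustration):

```python
import math

# For a sine wave v(t) = Vp*sin(wt), the RMS value is Vp/sqrt(2),
# so the peak is sqrt(2) times the quoted RMS voltage.
def peak_from_rms(v_rms):
    return v_rms * math.sqrt(2)

vp = peak_from_rms(120.0)   # 120 V RMS chosen as an illustrative supply
print(round(vp, 1))         # 169.7

# Numerical check: sample one full cycle and compute the RMS directly.
N = 10000
samples = [vp * math.sin(2 * math.pi * k / N) for k in range(N)]
rms = math.sqrt(sum(s * s for s in samples) / N)
print(round(rms, 1))        # 120.0
```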

13: If photons have no mass, how can they have momentum? (score 223130 in 2014)

Question

As an explanation of why a large gravitational field (such as a black hole) can bend light, I have heard that light has momentum. This is given as a solution to the problem of only massive objects being affected by gravity. However, momentum is the product of mass and velocity, so, by this definition, massless photons cannot have momentum.

How can photons have momentum?

How is this momentum defined (equations)?

Answer accepted (score 112)

The answer to this question is simple and requires only SR, not GR or quantum mechanics.

In units with \(c=1\), we have \(m^2=E^2-p^2\), where \(m\) is the invariant mass, \(E\) is the mass-energy, and \(p\) is the momentum. In terms of logical foundations, there is a variety of ways to demonstrate this. One route starts with Einstein’s 1905 paper “Does the inertia of a body depend upon its energy-content?” Another method is to start from the fact that a valid conservation law has to use a tensor, and show that the energy-momentum four-vector is the only tensor that goes over to Newtonian mechanics in the appropriate limit.

Once \(m^2=E^2-p^2\) is established, it follows trivially that for a photon, with \(m=0\), \(E=|p|\), i.e., \(p=E/c\) in units with \(c \ne 1\).

A lot of the confusion on this topic seems to arise from people assuming that \(p=m\gamma v\) should be the definition of momentum. It really isn’t an appropriate definition of momentum, because in the case of \(m=0\) and \(v=c\), it gives an indeterminate form. The indeterminate form can, however, be evaluated as a limit in which \(m\) approaches 0 and \(E=m\gamma c^2\) is held fixed. The result is again \(p=E/c\).

Answer 3 (score 5)

“momentum is the product of mass and velocity, so, by this definition, massless photons cannot have momentum”

This reasoning does not hold. Relativistically, momentum is proportional to the product of energy and velocity, \(p = Ev/c^2\), not mass and velocity.

“How is this momentum defined (equations)?”

Inserting factors of \(c\), the relativistically correct relation between momentum \(p\) and velocity \(v\) is \[c^2 p = E v\] This holds for non-relativistic massive particles (total energy dominated by rest-energy: \(E = m c^2\), and therefore \(p=mv\)) as well as for massless particles like photons (\(v = c\) and hence \(p=E/c\)).
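Combining \(p = E/c\) with \(E = hc/\lambda\) gives the familiar \(p = h/\lambda\) for a photon. A quick numerical sketch (the 500 nm wavelength is an arbitrary illustrative choice):

```python
# For a photon: p = E/c, and with E = h*c/lambda this gives p = h/lambda.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s

def photon_momentum(wavelength_m):
    return H / wavelength_m

def photon_energy(wavelength_m):
    return H * C / wavelength_m

lam = 500e-9   # green light, ~500 nm (illustrative)
p = photon_momentum(lam)
E = photon_energy(lam)
print(p)                 # ~1.33e-27 kg*m/s: tiny, but nonzero
print(E / C)             # same number: E/c and h/lambda agree
```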

14: What is the difference between diffraction and interference of light? (score 215956 in 2014)

Question

I know these two phenomena but I want to know a little deep explanation. What type of fringes are obtained in these phenomena?

Answer accepted (score 6)

  1. Two separate wave fronts originating from two coherent sources produce interference. Secondary wavelets originating from different parts of the same wave front constitute diffraction. Thus the two are entirely different in nature.
  2. The regions of minimum intensity are perfectly dark in interference; in diffraction they are not perfectly dark.
  3. The fringes are of equal width in interference; in diffraction they are never equal.
  4. All intensity maxima have the same intensity in interference; in diffraction their intensity varies.

Diffraction pattern


Interference pattern


Answer 3 (score 5)

Diffraction occurs when a wave encounters an obstacle or a slit comparable in size to its wavelength, whereas interference is the phenomenon where waves meet each other and combine additively or subtractively to form composite waves. In a sense there are similarities, in that both phenomena take a given wave and produce other waves (in general with different frequency or phase and/or amplitude). The main difference is the mechanism: diffraction involves a wave and some obstacle or object which deflects or bends the wave, while interference involves a wave combining with other waves. In physical experiments both phenomena can happen and be part of the same overall phenomenon.

The (geometrical) type of fringes can be similar in some cases for the two phenomena, or different; I don’t think there is a general pattern here (as one can check in the Wikipedia images).

15: How does mass leave the body when you lose weight? (score 214953 in 2019)

Question

When your body burns calories and you lose weight, obviously mass is leaving your body. In what form does it leave? In other words, what is the physical process by which the body loses weight when it burns its fuel?

Somebody said it leaves the body in the form of heat but I knew this is wrong, since heat is simply the internal kinetic energy of a lump of matter and doesn’t have anything do with mass. Obviously the chemical reactions going on in the body cause it to produce heat, but this alone won’t reduce its mass.

Answer accepted (score 109)

There’s a lot of detail you could go into with regard to this question, as is done in the other answers and comments, but I think the answer itself is pretty simple. Imagine a surface that just barely surrounds your body, as if you shrink-wrapped a body in plastic. By the law of conservation of mass (valid in non-relativistic physics), the only way your body can lose any amount of mass is for that amount of mass to pass out through the surface. So you just have to consider what bodily functions cause that to happen. I think they’ve all been identified in the comments:

  • Excretion
  • Exhaling
  • Sweating

Actually, any dead skin cells, strands of hair, etc. that fall off you would also count, although my guess is that those represent a minor contribution.

As a bonus, the “shrink-wrap view” also makes it easy to identify the ways in which you gain mass, by looking for all processes that cause matter to be drawn in through the invisible surface:

  • Eating & drinking - solids and liquids through the esophagus and gastrointestinal tract
  • Inhaling - gas through the trachea and lungs

The thing is, when most people talk about losing weight, they’re referring to a long-term average loss of mass, which means that the processes in the first list have to remove more mass over some extended period of time than the ones in the second list bring in. This clearly requires some of the preexisting mass in your body to be converted into the waste forms that you can dispose of through excretion, exhaling, and sweating. This preexisting mass generally tends to be body fat. The other answers do a pretty good job filling in the details of how the fat gets converted to waste products.

Answer 2 (score 57)

Essentially, losing weight occurs by burning fuel, precisely as your car does when it burns petrol and emits exhaust gases.

The only difference is that for humans the fuel is found in the form of sugars. The fat is what you ultimately want to get rid of, of course, but sugars are more easily processed, so they are what you remove first.

The basic aerobic cycle is the Krebs cycle.


But to reach it, glucose first needs to be broken down to pyruvate (by the anaerobic process of glycolysis) and then to acetyl-CoA. One can gain some energy from this breakdown, but not much. The real energy is hiding in the actual Krebs cycle, but for it one needs (besides lots of other stuff) the mentioned acetyl-CoA and oxygen (this explains why you don’t get enough energy when not breathing properly), producing carbon dioxide and some energy that is stored in \(ATP\) (adenosine triphosphate) and transported to wherever it is needed inside the cell (you are mostly interested in muscle contractions performed by muscle cells). So, you’ll burn whatever amount of sugar you have ready in the body. You’ll also lose carbon (initially stored in the glucose) by exhaling it in the form of \(CO_2\). There is also additional hydrogen produced and carried away in the form of \(NADH\) and \(FADH_2\). It’s hard to estimate where it will end up though, as it is (similarly to \(ATP\)) used all over the organism.

Now, the body does not store sugars in the form of glucose. Instead, they are stored as glycogen (mainly in the liver and muscles), which is a polysaccharide similar to starch. This is then quickly broken down to glucose as needed. But the body can keep only a small amount of glycogen (corresponding roughly to an hour of running, depending on one’s fitness).

There is another form of sugar storage: the body can convert it to fat. This is done when there is already enough glycogen in the organism. The body fat can then be reduced to acetyl-CoA (by lipolysis and then by beta oxidation), but this requires a lot more oxygen and so is not used while glycogen is at the body’s disposal. But with regular exercise the body can be trained to burn a greater proportion of fat relative to glycogen (this is of course necessary for long-distance runners and cyclists, because there is no way they would get enough energy just from glycogen).

To get a rough idea of the amount of mass you’ll burn, read the calorie content of some food. Sugar has something like 4 kcal per gram and fat 9 kcal per gram. One hour of running corresponds to something like 700 kcal, so if you are burning 50% sugars and 50% fat you’ll be roughly 100 g lighter. All of these numbers are just very rough estimates depending on what kind of exercise you do and the general state of your body.


Note that you’ll also lose lots of water and minerals during the exercise. But I am not counting this in the mass balance, as you need to replenish those in order to stay healthy. Also, some muscle will gradually form, which actually adds weight.

Answer 3 (score 49)

When you exercise, you “burn” more glucose, the simplified reaction for which (from Wikipedia) is:

\({\rm C_6H_{12}O_6 + 6~O_2 → 6~CO_2 + 6~H_2O}\)

So when you exhale, the carbon in the carbon dioxide, and the hydrogen and the oxygen in the water vapor, came from the glucose being burned, thereby removing that mass from the body.
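The mass bookkeeping in this reaction can be checked directly from (integer) atomic masses; a sketch, not part of the original answer:

```python
# Mass balance for C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O, using integer atomic
# masses (C = 12, H = 1, O = 16) as a rough check.
C_MASS, H_MASS, O_MASS = 12, 1, 16

glucose = 6 * C_MASS + 12 * H_MASS + 6 * O_MASS   # 180 g/mol
oxygen = 6 * (2 * O_MASS)                          # 192 g/mol
co2 = 6 * (C_MASS + 2 * O_MASS)                    # 264 g/mol
water = 6 * (2 * H_MASS + O_MASS)                  # 108 g/mol

print(glucose + oxygen)        # 372: mass in
print(co2 + water)             # 372: mass out, every gram accounted for
# Burning 1 g of glucose exports ~1.47 g of CO2 ...
print(round(co2 / glucose, 2))
# ... but part of that exhaled mass entered the body as inhaled O2,
# which is why "mass leaves through the lungs" is consistent with
# conservation of mass.
```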

16: Why does a glass rod when rubbed with silk cloth acquire positive charge and not negative charge? (score 205444 in 2015)

Question

I have read many times in the topic of induction that a glass rod when rubbed against a silk cloth acquires a positive charge. Why does it acquire positive charge only, and not negative charge?

It is also said that the glass rod attracts small uncharged paper pieces when it becomes positively charged. I understand that a positively charged glass rod attracts uncharged pieces of paper because some of the electrons present in the paper accumulate at the end near the rod, but can’t we extend the same argument to the attraction between the negatively charged silk cloth and the pieces of paper, due to accumulation of positive charge near the end?

Answer accepted (score 10)

You might know that all matter is made up out of atoms. Now, atoms themselves have a core, or nucleus, and electrons orbiting around the nucleus. The core has positive charge, the electrons have negative charge.

When you are rubbing the glass rod with the silk cloth, electrons are stripped away from the atoms in the glass and transferred to the silk cloth. This leaves the glass rod with more positive than negative charge, so you get a net positive charge.

Why do the electrons go from glass to silk and not from silk to glass? That depends a lot on the minute details of the material. Ultimately, for every two materials there is one of them where electrons are happier. It just turns out that for glass and silk, electrons are happier at the silk cloth.

Now to your second question. Here, the important thing to note is that in your typical solid material, the positive charges, which are the cores of the atoms, cannot move around much. They are locked into a rigid structure. The tiny electrons, however, can move around much better. That’s why the glass rod can induce a net negative charge at the near end of the pieces of paper.

EDIT: Let me add that there should also be some attraction between the silk and a bunch of paper: The electrons in the paper will be pushed away by the electrons in the silk, leaving the end of the paper that is closer to the silk with a net positive charge that then gets attracted. However, it might very well be that in your silk cloth the electrons are overall too spread out to have a strong enough attractive effect.

Answer 2 (score 4)

This is because glass is above silk in the triboelectric series (attracts electrons less than silk) and when rubbed, silk ‘takes’ its electrons. And yes, if you had a silk rod it would also attract neutral paper, because paper pieces are turned into dipoles, as you explained.

Answer 3 (score 1)

Well, this can be explained by the work functions of the materials. Due to rubbing, heat is generated, which supplies the energy for the removal of electrons. As the work function of the glass rod is smaller than that of the silk cloth, it easily loses electrons to the silk cloth, which then releases energy (electron gain enthalpy), thus ensuring conservation of energy.

17: What does the magnitude of the acceleration mean? (score 205176 in 2014)

Question

I am a little confused as to what the magnitude of acceleration is and what it means.

Answer accepted (score 9)

Your question is kind of vague but I will try to respond. Acceleration is defined as the time rate of change of velocity. Since velocity has both magnitude and direction, so does acceleration. In other words, acceleration is a vector. The length of the vector is its magnitude. Its direction is the direction of the vector. So the magnitude of acceleration is the magnitude of the acceleration vector while the direction of the acceleration is the direction of the acceleration vector. This is, of course, true of all physical quantities defined as having a magnitude and a direction. As an example, if a car is traveling north and accelerating at a rate of 10 feet per second per second, then the magnitude of the acceleration is 10 feet per second per second and the direction of the acceleration is north. If the car was traveling south but accelerating at the same rate, then the magnitude of its acceleration vector would be the same but its direction would be south.

Answer 2 (score 4)

Acceleration is simply a rate of change of velocity.

So the magnitude tells you, how quickly velocity changes.

Answer 3 (score 2)

If you are talking about linear motion, then the magnitude of acceleration is simply a measurement of change in speed per unit time. As an example, say you are in a car starting from rest and you begin to speed up. Say that you reach a speed of \(20 {m \over s}\) in \(2\) seconds. This means the magnitude of your acceleration is: \[ a = {20 {m \over s} \over 2s} = 10 {m \over s^2}\] That is, your speed changed by \(20 {m \over s}\) every \(2\) seconds, or \(10 {m \over s}\) every second. Thus, when we talk about the magnitude of acceleration, we are talking about how quickly your speed changes in a given unit of time.

It is important to note that this is only the magnitude of acceleration. Acceleration is a vector, meaning it has both magnitude and direction. Therefore, the magnitude only describes part of any accelerated motion. Also, as is pointed out in a comment below a more precise definition of acceleration is needed when talking about nonlinear motion.
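The magnitude calculation above is easy to check numerically. Here is a minimal Python sketch, using the numbers from the worked example (the second call is an illustrative extra case, not from the answer):

```python
import math

def accel_magnitude(v1, v2, dt):
    """Magnitude of the average acceleration between velocity
    vectors v1 and v2 (m/s) over a time interval dt (s)."""
    dv = [b - a for a, b in zip(v1, v2)]          # change in velocity
    return math.sqrt(sum(c * c for c in dv)) / dt  # |dv| / dt

# Linear case from the answer: 0 -> 20 m/s in 2 s
print(accel_magnitude([0.0, 0.0], [20.0, 0.0], 2.0))   # 10.0 m/s^2

# Same speed change in the opposite direction: same magnitude,
# different direction of the acceleration vector
print(accel_magnitude([0.0, 0.0], [0.0, -20.0], 2.0))  # 10.0 m/s^2
```

This mirrors the point in the accepted answer: reversing the direction changes the acceleration vector but not its magnitude.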

18: What is the difference between phase difference and path difference? (score 200506 in 2018)

Question

The path difference is the difference between the distances travelled by two waves meeting at a point. Given the path difference, how does one calculate the phase difference?

Answer accepted (score 15)

Let’s assume that two stones are thrown into water at two points which are very near each other; then you will see the pattern shown in the figure below:

[figure]

Let’s mark the first point of disturbance as \(S_1\) and the other as \(S_2\); waves will then be emanated as shown above. By taking a cross-sectional view, you will see the same waves as shown in the figure below (in the explanation below, the wavelengths of the waves emanated from the two disturbances are assumed to be the same).

[figure]

The waves emanating from \(S_1\) have arrived exactly one cycle earlier than the waves from \(S_2\). Thus, we say that there is a path difference of one wavelength \(\lambda\) between the two waves. If the distance traveled by the waves from the two disturbances is the same, then the path difference will be zero. Once you know the path difference, you can find the phase difference using the formula given below:
\[\Delta{X}=\frac{\lambda\cdot\Delta{\phi}}{2\pi}\]
Here, \(\Delta{X}\) is path difference, \(\Delta{\phi}\) is phase difference.
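Rearranged for the phase difference, the formula is \(\Delta\phi = 2\pi\,\Delta X/\lambda\). A minimal Python sketch of the conversion (the wavelength value is illustrative, not from the question):

```python
import math

def phase_difference(path_diff, wavelength):
    """Phase difference (radians) from a path difference:
    dphi = 2*pi*dX/lambda."""
    return 2 * math.pi * path_diff / wavelength

lam = 0.5  # assumed wavelength in metres, purely for illustration
print(phase_difference(lam, lam))      # one full wavelength -> 2*pi (in phase)
print(phase_difference(lam / 2, lam))  # half a wavelength -> pi (180 deg out of phase)
```

A path difference of a whole wavelength gives a phase difference of \(2\pi\), i.e. the waves are back in phase, matching the figure discussion above.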

Answer 2 (score 2)

Consider two waves coming from different places and arriving at the same point. Also, let those two waves initially be in phase (i.e when one wave is at a maximum so is the other one). Furthermore, let the wavelength of each wave be the same (i.e. the distance between consecutive maximums).

Now let the first wave travel a distance equal to ten times its wavelength. Let the second wave travel a distance equal to 10.5 times its wavelength. That means that upon arriving at the same point, if the first wave is at a maximum then the second wave is at a minimum. We call this 180 degrees out of phase.

From this simple case, we can see that if both waves travel an integer number of wavelengths before interfering with each other, then the two waves will be “in phase” (assuming that both waves started out in phase). However, if one wave travels some fraction of a wavelength more than the other, then there will be a phase difference \(\phi=2\pi * \text{(path difference)}/\text{wavelength}\). This formula satisfies the simple example above and is easier to see if you draw out some cosine or sine waves (note that a cosine wave is just a sine wave phase shifted by \(\pi/2\)).

19: Why can Hiroshima be inhabited when Chernobyl cannot? (score 200415 in 2014)

Question

There was an atomic bomb dropped in Hiroshima, but today there are residents in Hiroshima. However, in Chernobyl, where there was a nuclear reactor meltdown, there are no residents living today (or very few). What made the difference?

Answer accepted (score 53)

While they work on the same principles, the detonation of an atomic bomb and the meltdown of a nuclear plant are two very different processes.

An atomic bomb is based on the idea of releasing as much energy from a runaway nuclear fission reaction as possible in the shortest amount of time. The idea being to create as much devastating damage as possible immediately so as to nullify enemy forces or intimidate the opposing side into surrender. Both effectively ensuring the conflict ends quickly. Thus, it would be important that the area bombed does not remain uninhabitable long after the two sides make peace (Ok, that’s my own speculation, but I think it’s a nice ideal to work with).

A nuclear reactor is based on the idea of producing power steadily from a controlled and sustained nuclear fission reaction. The point is that it does not release all of the energy at once; slower reaction processes are used to ensure maximum lifetime of the nuclear fuel.

Moving beyond the ideas behind each, the radioactive isotopes created in an atomic blast are relatively short-lived due to the nature of the blast and the fact that they are normally detonated above the ground to increase destructive power of the concussive wave. Most radioactive materials from an atomic blast have a maximum half-life of 50 years.

However, in the Chernobyl meltdown, most of the actual exploding was due to containment failure and explosions from steam build-up. Chunks of fuel rods and irradiated graphite rods remained intact. Furthermore, the reaction has, both initially and over its life, produced a far higher amount of radioactive materials. This is partly due to the nature of the reaction, the existence of intact fuel to this date, and that the explosion happened at ground level. A fission explosion at ground level creates more radioactive isotopes due to neutron activation in soil. Furthermore, the half-lives of the isotopes made in the Chernobyl accident (because of the nature of the process) are considerably longer. It is estimated that the area will not be habitable for humans for another 20 000 years (Edit: to prevent further debate I rechecked this number. That is the time before the area within the cement sarcophagus - the exact location of the blast - becomes safe. The surrounding area varies between 20 years and several hundred due to uneven contamination).

Long story short, an atomic bomb is, like other bombs, designed to achieve the most destructive force possible over a short amount of time. The reaction process that accomplishes this ends up creating short-lived radioactive particles, which means the initial radiation burst is extremely high but falls off rapidly. Whereas a nuclear reactor is designed to utilize the full extent of fission in producing power from a slow, sustained reaction process. This reaction results in the creation of nuclear waste materials that are relatively long-lived, which means that the initial radiation burst from a meltdown may be much lower than that of a bomb, but it lasts much longer.

In the global perspective: an atomic bomb may be hazardous to the health of those nearby, but a meltdown spreads radiation across the planet for years. At this point, everyone on Earth has averaged an extra 21 days of background radiation exposure per person due to Chernobyl. This is one of the reasons Chernobyl was a level 7 nuclear event.

All of this contributes to why, even though Hiroshima had an atomic bomb detonate over it, it is Chernobyl (and Fukushima too, I’ll wager) that remains uninhabitable.

Most of the relevant info for this can be found in Wikipedia.

One further thing:
As pointed out, one thing I forgot to mention is that the amount of fissionable material in an atomic bomb is usually considerably less than the amount housed in a nuclear reactor. A standard nuclear reactor can consume \(50000lb\) (\(\sim22700kg\)) of fuel in a year, whereas Little Boy held significantly less (around \(100-150lb\) or \(45-70kg\)). Obviously, having more fissionable material drastically increases the amount of radiation that can be output as well as the amount of radioactive isotopes. For example, the meltdown at Chernobyl released 25 times more Iodine-129 isotope than the Hiroshima bomb (an isotope that is relatively long-lived and dangerous to humans) and 890 times more Cesium-137 (not as long lived, but still a danger while it is present).

Answer 2 (score 14)

Short answer: A nuclear power plant contains a lot more nuclear material than an atomic bomb. The “Little Boy” bomb was detonated at 1968 feet (600m) over Hiroshima with the nuclear material dispersed quickly in the air; the Chernobyl meltdown contaminated its environment for decades.

Long answer:

http://en.wikipedia.org/wiki/Background_radiation

Total doses from the Chernobyl accident ranged from 10 to 50 mSv over 20 years for the inhabitants of the affected areas, with most of the dose received in the first years after the disaster, and over 100 mSv for liquidators. There were 28 deaths from acute radiation syndrome.[30]

Total doses from the Fukushima I accidents were between 1 and 15 mSv for the inhabitants of the affected areas. Thyroid doses for children were below 50 mSv. 167 cleanup workers received doses above 100 mSv, with 6 of them receiving more than 250 mSv (the Japanese exposure limit for emergency response workers).[31]

The average dose from the Three Mile Island accident was 0.01 mSv.[32]

http://www.huffingtonpost.com/patrick-takahashi/why-worry-about-fukushima_b_847250.html

Today, the background radiation in Hiroshima and Nagasaki is the same as the average amount of natural radiation present anywhere on Earth. It is not enough to affect human health.

There was a slight increase of leukemia in the Nagasaki region, but no additional incidence of cancers anywhere in and around Hiroshima. Thus, contrary to any kind of logical sense, while the high-altitude nuclear explosions (1968 feet for Hiroshima and 1800 feet for Nagasaki) immediately killed 200,000 people, these cities soon became safe, and are thriving today. I’m actually still wondering why.

But with respect to the relative long-term danger of nuclear power plants versus ATOMIC BOMBS, another article mentioned that there is a lot more fissionable material in the former compared to the latter. For example, a 1000 MW reactor uses 50,000 pounds of enriched uranium/year and produces 54,000 pounds of waste, which keeps accumulating, so in a 20-year period, there should be more than a million pounds of radioactive material on site. Little Boy had only 141 pounds of U-235, while Fat Man used 14 pounds of Pu-239.

Chernobyl released 200 times more radiation than the Hiroshima and Nagasaki bombs, combined. As far away as Scotland, the radiation rose to 10,000 times the norm. Frighteningly, the Fukushima reactors are said to be more dangerous than Chernobyl (Uranium-235) for two reasons: more enriched uranium, and Fukushima #3 has plutonium.

Answer 3 (score 13)

A quick calculation brings some of the points in the other answers into clear focus.

Consider a big power station, like Fukushima before its demise. Its output was at a whopping rate of \(5GW\).

From here I get the conversion factor that 1 kiloton of TNT equivalent is taken to be \(4.184\times 10^{12}\) joules. Assuming the Nagasaki bomb let slip 20 kiloton TNT equivalent, this is about \(8\times10^{13}J\).

Now do the calculation: how long does it take (working) Fukushima to output this much energy? Answer \(8\times10^{13} / 5\times10^9=16000s\). That is, about four and a half hours. Less than one afternoon’s output!

Now I hasten to add that I am in no way trivialising what was suffered by those at Hiroshima or Nagasaki. But in these terms, the amount of energy and consequent waste output by even a fearsome several-megaton bomb is rather trivial compared to the lifetime output of a power station. And the main contamination from a bomb tends to be lethal but very short-lived isotopes begotten by the irradiation of dirt and other matter sucked into the updraught.
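The back-of-the-envelope arithmetic above can be reproduced directly. A minimal Python sketch using the same assumed values (20 kt yield, 5 GW plant); keeping the full 20 kt value rather than the rounded \(8\times10^{13}\) J gives a slightly larger figure:

```python
# Values assumed in the answer, not measured data:
KILOTON_TNT_J = 4.184e12             # joules per kiloton of TNT equivalent
bomb_energy = 20 * KILOTON_TNT_J     # ~8.4e13 J for a 20 kt bomb
plant_power = 5e9                    # 5 GW power station

seconds = bomb_energy / plant_power  # time for the plant to match the bomb
print(seconds, seconds / 3600)       # ~16736 s, about 4.6 hours
```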

20: Water pressure calculation for a volume of water at a given height (score 189833 in 2015)

Question

I’m looking to create a rain harvesting system. I have a 275 gallon IBC tote that is 48" x 40" x 46". I have an adapter for a 3/4 garden hose at the bottom of the IBC tote. I’m trying to figure out three things:

  1. What is the pressure at the bottom of the tote, assuming that the tote is full?

  2. Would the pressure from the column of water in the tote be able to reach an 8’ tall planter 40’ away via the hose?

  3. How much would the pressure increase per foot that I elevated the tank?

Answer accepted (score 3)

A. What is the pressure at the bottom of the tote, assuming that the tote is full?

Only the depth of the water matters.

\(P=\rho gh\), where \(\rho\) is the density of the water, \(g\) is the acceleration due to gravity, and \(h\) is the depth of the water.

B. Would the pressure from the column of water in the tote be able to reach an 8’ tall planter 40’ away via the hose?

No, water will not rise above the height of the surface of the water in the tote.

C. How much would the pressure increase per foot that I elevated the tank?

\(P=\rho gh\), where \(h\) is the difference between the height of the surface of the water and the height of the point where the pressure is measured, such as the outlet of a hose coming from the tank.
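Plugging the questioner’s numbers into \(P=\rho gh\) gives concrete figures. A minimal Python sketch (standard values for water density and \(g\) are assumed):

```python
RHO_WATER = 1000.0   # kg/m^3, assumed density of water
G = 9.81             # m/s^2, standard gravity
INCH = 0.0254        # metres per inch
PSI = 6894.76        # pascals per psi

def hydrostatic_pressure(depth_m):
    """Gauge pressure at a given water depth: P = rho * g * h."""
    return RHO_WATER * G * depth_m

# Full tote: the 46-inch dimension taken as the water depth
p = hydrostatic_pressure(46 * INCH)
print(p, p / PSI)    # ~11462 Pa, ~1.66 psi

# Extra pressure gained per foot the tank is elevated:
print(hydrostatic_pressure(12 * INCH) / PSI)   # ~0.43 psi per foot
```

So a full tote at ground level supplies under 2 psi, which is consistent with the answer: it cannot push water up to an 8’ planter.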

Answer 2 (score 0)

In reviewing the answer supplied by troy, the explanation of his increased water pressure on the larger pipe is as follows:

The pressure from the tank is based on the height of the tank. A tank on a 25’ tower will supply at least 12.5 pounds per square inch (we don’t know the height of the surface of the water). The 3/4-inch pipe has an area of .44 sq in, so the 3/4" pipe will have a pressure of 5.5 psi. The 2" pipe has an area of 3.15 sq in, so it will have a pressure potential of approximately 40 psi. Reducing the pipe down to 3/4" after the drop will increase pressure by a factor of about 2.6, so an estimate of the pressure on this pipe would be about 90 psi.

21: How do moving charges produce magnetic fields? (score 175869 in 2014)

Question

I’m tutoring high school students. I’ve always taught them that:

A charged particle moving without acceleration produces an electric as well as a magnetic field.

It produces an electric field because it’s a charged particle. But when it is at rest, it doesn’t produce a magnetic field. All of a sudden when it starts moving, it starts producing a magnetic field. Why? What happens to it when it starts moving? What makes it produce a magnetic field when it starts moving?

Answer accepted (score 196)

If you are not well-acquainted with special relativity, there is no way to truly explain this phenomenon. The best one could do is give you rules steeped in esoteric ideas like “electromagnetic field” and “Lorentz invariance.” Of course, this is not what you’re after, and rightly so, since physics should never be about accepting rules handed down from on high without justification.

The fact is, magnetism is nothing more than electrostatics combined with special relativity. Unfortunately, you won’t find many books explaining this - either the authors mistakenly believe Maxwell’s equations have no justification and must be accepted on faith, or they are too mired in their own esoteric notation to pause to consider what it is they are saying. The only book I know of that treats the topic correctly is Purcell’s Electricity and Magnetism, which was recently re-released in a third edition. (The second edition works just fine if you can find a copy.)

A brief, heuristic outline of the idea is as follows. Suppose there is a line of positive charges moving along the \(z\)-axis in the positive direction - a current. Consider a positive charge \(q\) located at \((x,y,z) = (1,0,0)\), moving in the negative \(z\)-direction. We can see that there will be some electrostatic force on \(q\) due to all those charges.

But let’s try something crazy - let’s slip into \(q\)’s frame of reference. After all, the laws of physics had better hold for all points of view. Clearly the charges constituting the current will be moving faster in this frame. But that doesn’t do much, since after all the Coulomb force clearly doesn’t care about the velocity of the charges, only on their separation. But special relativity tells us something else. It says the current charges will appear closer together. If they were spaced apart by intervals \(\Delta z\) in the original frame, then in this new frame they will have a spacing \(\Delta z \sqrt{1-v^2/c^2}\), where \(v\) is \(q\)’s speed in the original frame. This is the famous length contraction predicted by special relativity.

If the current charges appear closer together, then clearly \(q\) will feel a larger electrostatic force from the \(z\)-axis as a whole. It will experience an additional force in the positive \(x\)-direction, away from the axis, over and above what we would have predicted from just sitting in the lab frame. Basically, Coulomb’s law is the only force law acting on a charge, but only the charge’s rest frame is valid for using this law to determine what force the charge feels.

Rather than constantly transforming back and forth between frames, we invent the magnetic field as a mathematical device that accomplishes the same thing. If defined properly, it will entirely account for this anomalous force seemingly experienced by the charge when we are observing it not in its own rest frame. In the example I just went through, the right-hand rule tells you we should ascribe a magnetic field to the current circling around the \(z\)-axis such that it is pointing in the positive \(y\)-direction at the location of \(q\). The velocity of the charge is in the negative \(z\)-direction, and so \(q \vec{v} \times \vec{B}\) points in the positive \(x\)-direction, just as we learned from changing reference frames.
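The length-contraction factor this argument relies on is easy to evaluate. A minimal Python sketch (the speeds used are purely illustrative):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def contraction_factor(v):
    """Length-contraction factor sqrt(1 - v^2/c^2) for speed v:
    spacings along the direction of motion shrink by this factor."""
    return math.sqrt(1 - (v / C) ** 2)

# At v = 0.6c the charge spacing shrinks to 80% of its rest-frame value:
print(contraction_factor(0.6 * C))   # 0.8

# At a typical electron drift speed (~1e-4 m/s) the factor is
# indistinguishable from 1 in double precision; the effect is only
# observable because the charge densities involved are enormous.
print(contraction_factor(1e-4))
```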

Answer 2 (score 25)

Electric and magnetic fields are what the electromagnetic field ‘looks like’ from a particular (inertial) frame of reference.

Take a charged particle: In its rest frame, it appears to generate an electric field only and no magnetic field at all. From a different frame of reference (in particular one in relative motion), we’ll see the charge moving, thus a current which generates a magnetic field as well.

This does not mean that setting the particle in motion somehow flipped a switch within the particle - rather, it’s an artifact of our choice of frame of reference: Observers in relative motion will measure different strengths of electric and magnetic fields the same way they measure different velocities and momenta.

There are however invariants of the electromagnetic field, i.e. things all observers can agree upon, and in particular \[ \begin{align*} P &= \mathbf {B}^2 - \mathbf E^2 \\ Q &= \mathbf E\,\cdot\mathbf B \end{align*} \]

Let’s take a nonzero em field with \(P,Q=0\), i.e. \(\mathbf E^2=\mathbf B^2\) and \(\mathbf E\perp\mathbf B\;.\) An example would be a plane electromagnetic wave, which will look like a plane wave for everyone.

Now, let \(P\not=0\) but \(Q=0\;.\) Then, we can find frames of reference where either the electric (in case of \(P>0\)) or the magnetic field (in case of \(P<0\)) vanishes. The rest frame of our charged particle would be such a one.

For more details, you’ll need to look into the literature on special relativity.
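The two invariants above can be checked for simple field configurations. A minimal Python sketch in units with \(c = 1\) (the field values are illustrative):

```python
def dot(a, b):
    """Dot product of two 3-vectors given as lists."""
    return sum(x * y for x, y in zip(a, b))

def field_invariants(E, B):
    """Lorentz invariants of the EM field (units with c = 1):
    P = B.B - E.E and Q = E.B are the same in every inertial frame."""
    return dot(B, B) - dot(E, E), dot(E, B)

# Plane wave: |E| = |B| and E perpendicular to B  ->  P = Q = 0,
# so it looks like a plane wave to every observer.
print(field_invariants([1, 0, 0], [0, 1, 0]))   # (0, 0)

# Pure electric field (e.g. a charge's rest frame): P < 0, Q = 0,
# so some frame exists in which the magnetic field vanishes.
print(field_invariants([2, 0, 0], [0, 0, 0]))   # (-4, 0)
```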

Answer 3 (score 18)

Although Chris White’s answer to the question “Why Moving Charges Produce a Magnetic Field?” posted by a High School teacher (Claws) last year, was selected as the best answer, I think it contains several pitfalls. Chris White imagines a stream of positive charges flowing in the \(+z\) axis direction, while a test charge \(+q\) initially located at \((1,0,0)\) is moving in the opposite \((-z)\) direction with speed \(v\). Next he intends to prove that when the observer locates himself in the frame of the moving test charge, he will see, in addition to the regular electrostatic Coulomb (repulsion) force acting on the test charge, an additional repulsion in the \(+x\) direction whose origin is entirely relativistic. This happens, he says, because the original separation \(Δz_0\) between the charges (when seen from the Lab rest frame) is now contracted to \(Δz = Δz_0\sqrt{(1-v^2/c^2)}\) (The “famous” Lorentz contraction).

Consequently all the distances of the flowing charges to the test charge become smaller (as if the charge density increased) and, hence, Coulomb repulsions also increase. This excess of repulsion is the “illusory” magnetic force that the Lab observer sees when the test charge moves in the \(–z\) direction with speed \(v\).

In short: there is no intrinsic magnetic force. All is Coulomb force, seen from the Lab frame (pure electrostatic force), or seen from the moving charge frame (electrostatic plus more Coulomb repulsion). We can bypass here all the quantitative details which White also omits, but we cannot overlook the pitfalls:

  1. First there is a verbal contradiction: to notice the contracted \(Δz\), smaller than \(Δz_0\), the observer must locate himself at rest with the charge \(q\) (i.e., moving with the charge). But then at the end, White says that the new “anomalous force seemingly experienced by the charge” (i.e., the defined magnetic field), occurs “when we are observing it not in its own rest frame” (emphasis mine). So, what’s the deal? To predict the extra Coulomb (magnetic) force we have to adopt the frame of the moving charge. But to observe it we have to remain in the Lab frame, which is NOT the moving charge frame.
  2. In the same vein there is a numerical pitfall: the new (contracted) charge separation Δz observed from the frame of the moving charge is calculated as \(Δz=Δz_0\sqrt{(1-v^2/c^2)}\) where \(v\), says White, is “\(q\)’s speed in the original frame”. He should have put not \(v\) but \(2v\), since the relative velocity between the charge stream going up, \(v\), and the test charge going down, \(-v\), is \(v-(-v) = 2v\). So the contraction factor should be \(\sqrt{1-4v^2/c^2}\).
  3. Furthermore, if we use the heuristic strategy used by White, we reach a contradiction: Start with all charges at rest: the \(z\) axis full of charges and the test charge at \((1,0,0)\). Call \(Δz_0\) the separation between all charges at rest. Now allow the \(z\) axis charges to move as before, with a speed \(+v\). Already the Lab observer AND THE TEST CHARGE \(q\), will see a contraction of the separation according to \(Δz = Δz_0\sqrt{(1-v^2/c^2)}\). Hence by the same maneuvers as before, special relative must predict an additional “Coulomb” repulsion due to the compacted charge density. So the “magnetic” force, thus predicted, must act on the RESTING charge at \((1,0,0)\). And this is not observed. To the best of my knowledge, no current along the \(z\) axis can ever produce a magnetic force on a resting charge at the origin.

In conclusion: contrary to what White says, magnetism is NOT JUST electrostatics plus special relativity. Such a reductionist view turns magnetism into a superficial game between frames of reference.

22: How to determine the direction of induced current flow? (score 174193 in 2018)

Question

There are three ways of inducing current in a loop/coil of wire as shown in my book. We can have a magnet approach a coil of wire, or a wire approaching a magnet. Both can be understood in the same way.

[figure]

On the other hand, we can also change the magnetic flux by pushing a loop of wire into a magnetic field.

[figure]

This confuses me. Two vertical lines are cutting across the field. But since they are connected, the induced currents, I conjecture, would cancel each other.

I saw in a YouTube video that to determine the direction in situations like 2, one curls the fingers of the right hand along the wire, with the thumb pointing in the direction of the field. The curled fingers then give the direction of the current. Basically, the direction of current indicated by the thumb is always opposite to the direction of the changing field.

Answer accepted (score 5)

The rule is called Lenz’s Law. You already appear to know how to determine the direction of the magnetic field due to a current in a loop, which is part of the answer. What Lenz’s Law tells us is that the direction of the induced current in the loop is such that it “opposes the change in the flux”.

Here’s a picture I grabbed from https://web2.ph.utexas.edu/~coker2/index.files/induction.htm to illustrate this. The B-field is pointing up and increasing. So we say there is an “induced B field” opposing this increase (so it points down). The current in the loop is such that it would create this induced field according to the usual right-hand rule for B-fields due to loops.

So imagine that the B-field was decreasing instead. So if B is up, the change in the B-field is down. Thus the induced B-field would point up and the current would be opposite to what is in the diagram. Here is another diagram from the same website showing some other cases. Practice working through the reasoning of Lenz’s Law to verify that you see why the induced current is in the direction indicated for each case:

[figure]

A word of warning about this concept. The change in the B-field does not really induce an “induced B-field”. What it really does is create a circular E-field, and it is this which drives the current in the loop. The “induced B-field” is a fictional construct which makes it easier to figure out the direction of the current.
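The Lenz’s-law reasoning above reduces to a sign check on the flux change. A minimal Python sketch (the sign convention and flux values are assumptions for illustration):

```python
def induced_current_sense(flux_before, flux_after):
    """Lenz's law sign rule: the induced current opposes the change
    in flux. Convention assumed here: positive flux points 'up'
    through the loop, and a counterclockwise current (viewed from
    above) produces positive (upward) flux by the right-hand rule."""
    change = flux_after - flux_before
    if change > 0:
        # Induced B must point down to oppose the increase.
        return "clockwise"
    if change < 0:
        # Induced B must point up to oppose the decrease.
        return "counterclockwise"
    return "no current"

print(induced_current_sense(0.0, 0.5))  # upward flux increasing -> clockwise
print(induced_current_sense(0.5, 0.2))  # upward flux decreasing -> counterclockwise
print(induced_current_sense(0.5, 0.5))  # constant flux -> no current
```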

Answer 2 (score 1)

I think the main idea that you are missing is that the induced current is caused by the change in the magnetic “flux” (which is not the same as the magnetic field). Without resorting to any calculus (which is needed for the real definition of flux) the flux is like the magnetic field times the cross-sectional area that it “goes through” inside your coil/loop. So there are two key ideas here to be able to understand all cases:

  1. It is the flux, not the field, that matters.
  2. It is the rate of change of flux, not the absolute size of the flux, that matters.

So, in situation 1 the flux is changing because the field strength inside the loop is changing. For the magnet approaching the coil the strength of the field inside the coil is increasing, so this makes the flux also increase. For the magnet moving away from the coil the strength of the field inside the coil is decreasing, so the flux is also decreasing. This increasing/decreasing difference is why the induced current direction depends on which way you move the magnet. Notice that if you hold the magnet stationary no current is induced (remember, it is the change in flux that matters).

Now for situation 2 maybe it is easier to think about having a coil whose diameter you can change. If you increase the diameter (holding the nearby magnet stationary) then the field strength is the same but the flux increases because the area that the field “goes through” has increased. So this would have the same effect as increasing the field, such as by moving the magnet closer. Similarly, decreasing the coil diameter will have the same effect as moving the magnet farther away.

So finally, looking at the scanned picture for your situation 2, we have a coil moving from a place where there is no field to a place where there is a field. So the flux increases as the loop moves into this region and a current will be induced. How would this be achieved in practice? Finding a region with truly zero field would be difficult, but it is easy to move a coil from a place with a weak field to a place with a strong field.

Hope that helps! In my opinion induced currents are hands-down the most difficult things to figure out in elementary electromagnetism, so don’t be surprised that you’re having some trouble wrapping your head around them.
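The “flux = field times area” picture above can be made quantitative with Faraday’s law, EMF \(= -\Delta\Phi/\Delta t\). A minimal Python sketch of the changing-diameter thought experiment (all the numbers are illustrative assumptions):

```python
import math

def flux(B, diameter):
    """Magnetic flux through a circular loop of the given diameter (m)
    in a uniform field B (T) perpendicular to the loop: Phi = B * A."""
    return B * math.pi * (diameter / 2) ** 2

def avg_emf(B, d_before, d_after, dt):
    """Average induced EMF = -dPhi/dt (Faraday's law) when the loop
    diameter changes from d_before to d_after over time dt."""
    return -(flux(B, d_after) - flux(B, d_before)) / dt

# Doubling a 10 cm loop's diameter in 0.5 s inside a 0.2 T field:
print(avg_emf(0.2, 0.10, 0.20, 0.5))   # about -9.4e-3 V
```

Even with a constant field, changing the area changes the flux and induces an EMF, which is exactly the point of the coil-diameter example.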

23: Does the rotation of the earth dramatically affect airplane flight time? (score 173118 in )

Question

Say I’m flying from Sydney, to Los Angeles (S2LA), back to Sydney (LA2S).

During S2LA, travelling with the rotation of the earth, would the flight time be longer than LA2S on account of Los Angeles turning/moving away from our position?

Or, in the opposite direction, would the flight to Sydney be faster since the Earth turns underneath us and moves Sydney closer?

===

  • Please ignore jet stream effects and all other variables; this is a control case in an ideal environment.

  • By “dramatically” I suppose I mean a delay of 1 hour or more.

Answer accepted (score 28)

During the flight, you need to get up to use the restroom. There’s one 10 rows in front of you, and another 10 rows behind you. Does it take longer to walk to the one that’s moving away from you at 600 mph than the one that’s moving towards you at 600 mph?

No, because you’re moving at 600 mph right along with it – in the ground-based frame of reference. In the frame of reference of the airplane, everything is stationary.

Similarly, the airplane is already moving along with the surface of the Earth before it takes off. The rotation of the Earth has no direct significant effect on flight times in either direction.

That’s to a first-order approximation. As others have already said, since the Earth’s surface is (very nearly) spherical and is rotating rather than moving linearly, Coriolis effects can be significant. But prevailing winds (which themselves are caused by Coriolis and other effects) are more significant than any direct Coriolis effect on the airplane.

Answer 2 (score 12)

When an airplane starts in any direction, its velocity with respect to any reference frame automatically gets the contribution from the moving Earth’s surface.

Equivalently, you may look at the whole situation from the viewpoint of the Earth’s surface, and then the Earth’s rotation is invisible and can’t influence the speed and timing of flights, because of the principle of relativity. In this idealized description, there’s no difference. This conclusion would be exactly right if we could neglect the atmosphere, as we can for, e.g., rockets that spend most of their time outside it.

However, the atmosphere does exist, and it has winds, whose average speed ultimately depends on the Earth’s spinning as well, though the dependence is indirect. In the temperate zones, the westerlies dominate – winds from the West

http://en.wikipedia.org/wiki/Westerlies

and because the airplane flies in the atmosphere and wants to reach a particular speed relative to the air mass, it’s clear that the westerlies speed you up when you fly from the West and slow you down when you fly toward the West. That’s why flights from America to Europe (or from Sydney to California) are about 1 hour faster than the opposite flights.

Answer 3 (score 7)

The rotation of the Earth causes two effects: the centrifugal force and the Coriolis force.

The effect of the centrifugal force is exactly balanced out by the fact that the Earth is non-spherical (it bulges at the equator). The whole surface of the Earth is an isopotential surface with respect to “gravity plus centrifugal force”. The downward force that pulls on everything, which people lazily call “gravity”, is really “gravity plus centrifugal force” on Earth. There are no other special or surprising effects of the centrifugal force on Earth. Things get pulled down towards the ground like we intuitively expect. [Update: As pointed out in the comments, gravity is weaker at the equator, but only by a fraction of a percent; I doubt that measurably affects airplane speed.]

The Coriolis force has a very important and very indirect effect on air travel because it alters winds, weather, and in particular the direction of the jet stream. (See Luboš’s answer.) As for direct effects on the airplane, they are negligibly small. The plane experiences a force pushing it rightward (in the northern hemisphere), about 300X weaker than gravity or more (if I calculated correctly). The pilot steers very slightly leftward to compensate, maybe not even consciously.
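That "300X weaker" figure can be checked on the back of an envelope. The maximum Coriolis acceleration on a body moving at speed v is 2Ωv, with Ω the Earth's rotation rate; the cruise speed of ~250 m/s below is an assumed typical airliner value, and the exact ratio depends on latitude and heading:

```python
import math

omega = 2 * math.pi / 86164   # Earth's rotation rate, rad/s (sidereal day)
v = 250.0                     # m/s, assumed airliner cruise speed
g = 9.8                       # m/s^2

a_coriolis = 2 * omega * v    # maximum Coriolis acceleration magnitude
ratio = g / a_coriolis        # how many times weaker than gravity

print(a_coriolis)   # ~0.036 m/s^2
print(ratio)        # a few hundred: consistent with the ~300X claim
```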

24: How can momentum but not energy be conserved in an inelastic collision? (score 172744 in 2016)

Question

In inelastic collisions, kinetic energy changes, so the velocities of the objects also change.

So how is momentum conserved in inelastic collisions?

Answer accepted (score 38)

I think all of the existing answers miss the real difference between energy and momentum in an inelastic collision.

We know energy is always conserved and momentum is always conserved so how is it that there can be a difference in an inelastic collision?

It comes down to the fact that momentum is a vector and energy is a scalar.

Imagine for a moment there is a “low energy” ball traveling to the right. The individual molecules in that ball all have some energy and momentum associated with them: low energy ball traveling to the right

The momentum of this ball is the sum of the momentum vectors of each molecule in the ball. The net sum is a momentum pointing to the right. You can see the molecules in the ball are all relatively low energy because they have a short tail.

Now after a “simplified single ball” inelastic collision here is the same ball:

high energy ball traveling to the right

As you can see, each molecule now has a different momentum and energy, but the sum of all of their momentum vectors is still the same value to the right.

Even if the individual momentum of every molecule in the ball increases in the collision, the net sum of all of their momentum vectors doesn’t have to increase.

Because energy isn’t a vector, increasing the kinetic energy of molecules increases the total energy of the system.

This is why you can convert kinetic energy of the whole ball to other forms of energy (like heat) but you can’t convert the net momentum of the ball to anything else.

Answer 2 (score 34)

So how is momentum conserved in inelastic collisions?

It is a basic law of physics that momentum is always conserved - there is no known exception. Kinetic energy does not need to be conserved, because it can turn into other forms of energy - for example potential energy or internal/thermal energy (“heat”). Momentum can also turn into other forms of momentum - momentum of the EM field - but the amount of momentum so transformed is negligible in ordinary collisions of macroscopic bodies.

Answer 3 (score 19)

Energy and momentum are always conserved. Kinetic energy is not conserved in an inelastic collision, but that is because it is converted to another form of energy (heat, etc.). The sum of all types of energy (including kinetic) is the same before and after the collision.
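A quick numerical illustration of both statements, using made-up masses and speeds for a perfectly inelastic collision (two carts that stick together): the total momentum is unchanged, while some kinetic energy disappears into heat and deformation.

```python
# Perfectly inelastic collision: cart 1 hits stationary cart 2 and sticks.
m1, v1 = 2.0, 3.0   # kg, m/s
m2, v2 = 1.0, 0.0

# Momentum conservation fixes the final common velocity.
v_final = (m1 * v1 + m2 * v2) / (m1 + m2)        # 2.0 m/s

p_before = m1 * v1 + m2 * v2                     # 6.0 kg*m/s
p_after = (m1 + m2) * v_final                    # 6.0 kg*m/s (conserved)

ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2  # 9.0 J
ke_after = 0.5 * (m1 + m2) * v_final**2          # 6.0 J

print(p_before, p_after)    # momentum unchanged
print(ke_before, ke_after)  # 3 J converted to heat, sound, deformation
```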

25: Finding the force of friction of a moving object and its change when it accelerates to a constant speed (score 172526 in 2018)

Question

If an object is moving at a constant speed the force of friction must equal the applied (horizontal) force, and for it to be accelerating or decelerating, the force of friction and the applied force must be unequal. Also, I know that \(f = \mu N\).

This is what I do not understand: if the applied force is greater than the friction, wouldn’t that mean the object would continue to accelerate infinitely? Shouldn’t the friction force change to equal the applied force to prevent this? If so, how do I work out how the friction force \(f\) changes?

Here is a sample situation: say I have a box with mass \(10\) kg and I apply a horizontal force \(50\) N, and the coefficient of kinetic friction is \(0.5\). How long does it take for the box to finish accelerating and reach a constant velocity? Then, if I increase the force to \(60\) N, how long does it take to reach constant velocity again?

Answer accepted (score 4)

This might be more detailed than you want; I apologize in advance.

There are two forms of friction:

  1. Static friction: the force of friction exerted on an object when it is at rest.

  2. Kinetic friction: the force of friction exerted on an object when it is in motion.

These two forms of friction have qualitatively different properties. Specifically, the force of kinetic friction depends only on the magnitude of the normal force \(F_N\) exerted on the moving object and the coefficient of kinetic friction \(\mu_k\) of the surface on which it is moving. In fact, as you point out, the magnitude of the force of kinetic friction is given by \[ F_k = \mu_k F_N \] The force of static friction, on the other hand, changes depending on the other external forces on the object.

To understand why, think of a box sitting still on a horizontal table. The box will not feel a friction force in the absence of any other force (if it did, then it would accelerate). However, if you start exerting a small enough force on the box, it still will not move, and in this case, the static friction force is exactly counterbalancing the force you exert. If you push hard enough, however, the box will eventually start sliding. This illustrates that the static friction force can have any value between zero, and some maximum which turns out to be given by \(\mu_s F_N\) where \(\mu_s\) is the coefficient of static friction. Mathematically, this can be expressed by the following equation: \[ F_s \leq \mu_s F_N \] where \(F_s\) is the magnitude of the static friction force.

Having said all of this, let me reiterate that kinetic friction always has the magnitude \(\mu_k F_N\), regardless of the object’s speed (as long as it is sliding). If you continuously push an object with a force greater than this value, then it will keep accelerating forever. For the acceleration to stop, you would need to reduce the applied force until it equals the kinetic friction force \(\mu_k F_N\), at which point the object moves at constant velocity.

Lastly, given an applied force \(F\), the acceleration of the object will satisfy Newton’s second law, which says that the net applied force equals the mass of the object multiplied by its acceleration: \[ F - F_k = ma \] The acceleration of the object is the rate of change of its velocity, so to determine the velocity as a function of time you would, in general, have to solve the following differential equation: \[ \frac{dv}{dt} = \frac{1}{m}(F - F_k) \]
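Plugging the question's numbers into this (assuming g = 9.8 m/s²) makes the conclusion concrete: the box never "finishes accelerating" while the 50 N push continues, because kinetic friction stays fixed at 49 N.

```python
# The question's numbers (g = 9.8 m/s^2 is an assumption of this sketch).
m = 10.0          # kg
mu_k = 0.5
g = 9.8           # m/s^2
F_applied = 50.0  # N

F_k = mu_k * m * g            # kinetic friction: 49 N, constant while sliding
a = (F_applied - F_k) / m     # net acceleration: 0.1 m/s^2

# Kinetic friction never grows to match the push, so v(t) = a * t
# increases without bound in this idealized model.
v_after_10s = a * 10.0        # 1.0 m/s after 10 s, 2.0 m/s after 20 s, ...
print(F_k, a, v_after_10s)
```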

Answer 2 (score 1)

You are actually completely right, and then at the last minute you look the wrong way!!!

Let’s examine the last bit. If you were to continue applying a net force on an object for an infinite time, it would indeed accelerate indefinitely. There is nothing wrong here. The key point is the NET force: the total amount of force, and its direction, resulting from the friction acting on the object (resisting, negative) and the pushing you do (pushing, positive). Also, if you were to stop pushing on your object, then the net force is due to friction alone; this will cause it to slow down (negative acceleration). Like you initially said, you have to balance the friction and how hard you push to get that constant speed. If you don’t get the balance right, and you keep pushing forever, your speed will increase/decrease forever!

Answer 3 (score 0)

F = m × a

For example, if the mass is 25 kg and the acceleration is 0.25 m/s², then multiplying the two gives F = 25 × 0.25 = 6.25 N.

26: Why doesn’t the Moon fall onto the Earth? (score 172103 in 2018)

Question

Why doesn’t the Moon fall onto the Earth? For that matter, why doesn’t anything rotating a larger body ever fall onto the larger body?

Answer accepted (score 103)

The moon does not fall to Earth because it is in an orbit.

One of the most difficult things to learn about physics is the concept of force. Just because there is a force on something does not mean it will be moving in the direction of the force. Instead, the force influences the motion to be a bit more in the direction of the force than it was before.

For example, if you roll a bowling ball straight down a lane, then run up beside it and kick it towards the gutter, you apply a force towards the gutter, but the ball doesn’t go straight into the gutter. Instead it keeps going down the lane, but picks up a little bit of diagonal motion as well.

Imagine you’re standing at the edge of a cliff 100m tall. If you drop a rock off, it will fall straight down because it had no velocity to begin with, so the only velocity it picks up is downward from the downward force.

If you throw the rock out horizontally, it will still fall, but it will keep moving horizontally as it does so, and it falls at an angle. (The angle isn’t constant - the shape is a curve called a parabola, but that’s relatively unimportant here.) The force is straight down, but that force doesn’t stop the rock from moving horizontally.

If you throw the rock harder, it goes further, and falls at a shallower angle. The force on it from gravity is the same, but the original velocity was much bigger and so the deflection is less.

Now imagine throwing the rock so hard it travels one kilometer horizontally before it hits the ground. If you do that, something slightly new happens. The rock still falls, but it has to fall more than just 100m before it hits the ground. The reason is that the Earth is curved, and so as the rock traveled out that kilometer, the Earth was actually curving away underneath it. Over one kilometer, it turns out the Earth curves away by roughly 10 centimeters - a small difference, but a real one.

As you throw the rock even harder than that, the curving away of the Earth underneath becomes more significant. If you could throw the rock 10 kilometers, the Earth would now curve away by 10 meters, and for a 100 km throw the Earth curves away by an entire kilometer. Now the stone has to fall a very long way down compared to the 100m cliff it was dropped from.
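Those round figures can be checked with the small-angle estimate drop ≈ x²/(2R), where R ≈ 6371 km is the mean Earth radius; the estimate reproduces the order of magnitude of the round numbers above:

```python
# How far the Earth's surface "curves away" under a horizontal distance x,
# using the small-angle approximation drop = x^2 / (2R).
R = 6.371e6  # m, mean Earth radius

def drop(x):
    return x**2 / (2 * R)

print(drop(1_000))    # ~0.08 m over 1 km (the answer rounds to ~10 cm)
print(drop(10_000))   # ~8 m over 10 km
print(drop(100_000))  # ~800 m over 100 km
```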

Check out the following drawing. It was made by Isaac Newton, the first person to understand orbits. IMHO it is one of the greatest diagrams ever made.

(Newton’s cannonball diagram)

What it shows is that if you could throw the rock hard enough, the Earth would curve away from underneath the rock so much that the rock actually never gets any closer to the ground. It goes all the way around in the circle and might hit you in the back of the head!

This is an orbit. It’s what satellites and the moon are doing. We can’t actually do it here close to the surface of the Earth due to wind resistance, but on the surface of the moon, where there’s no atmosphere, you could indeed have a very low orbit.

This is the mechanism by which things “stay up” in space.

Gravity gets weaker as you go further out. The Earth’s gravity is much weaker at the moon than at a low-Earth-orbit satellite. Because gravity is so much weaker at the moon, the moon orbits much more slowly than the International Space Station, for example. The moon takes one month to go around; the ISS takes about an hour and a half. An interesting consequence is that if you go out just the right amount in between, about six and a half Earth radii, you reach a point where gravity is weakened enough that an orbit around the Earth takes 24 hours. There, you could have a “geosynchronous orbit”: a satellite that orbits so that it stays above the same spot on Earth’s equator as Earth spins.

Although gravity gets weaker as you go further out, there is no cut-off distance. In theory, gravity extends forever. However, if you went towards the sun, eventually the sun’s gravity would be stronger than the Earth’s, and then you wouldn’t fall back to Earth any more, even lacking the speed to orbit. That would happen at about 0.2% of the distance to the sun, or about 260,000 km, or about 40 Earth radii. (This is actually less than the distance to the moon, but the moon doesn’t fall into the Sun because it’s orbiting the sun, just like the Earth itself is.)

So the moon “falls” toward Earth due to gravity, but doesn’t get any closer to Earth because its motion is an orbit, and the dynamics of the orbit are determined by the strength of gravity at that distance and by Newton’s laws of motion.

note: adapted from an answer I wrote to a similar question on quora

Answer 2 (score 26)

The Moon is continuously falling towards the Earth but missing it all the time! The same goes for the planets orbiting the Sun.

In general, in an inverse-square central force field one can calculate the trajectory of a particle and verify that it is a parabola, ellipse, or hyperbola (a conic section), depending upon the initial position and initial momentum of the particle. For a two-body system with certain initial conditions, it is a stable elliptical orbit. In the case of the sun and the earth it is an ellipse (ignoring the gravitation of other objects and also ignoring the relativistic precession of the orbit).

This page has a nice video.

Answer 3 (score 9)

The truth is that the moon IS constantly trying to fall upon the earth, due to the force of gravity; but it is constantly missing, due to its tangential velocity.

To understand this, think of whirling a rock, tied to the end of a string, around and around, with your hand just above your head. As the rock travels in circles it is constantly being pulled toward you by the force on the string (which is like Earth’s pull of gravity on the moon). Why doesn’t the rock come bonk you on the head, if you are constantly pulling it toward your head? The answer is that the rock is always trying to change its velocity vector to come do just that; but the change is only enough to just keep it in a circular path, like the pull on the Moon is just enough to keep it in a circular orbit around Earth.

27: How to avoid getting shocked by static electricity? (score 171216 in 2017)

Question

Sometimes I get “charged”, and then when I touch something that conducts electricity, such as a person, a car, a metal door, etc., I get shocked by static electricity.

I’m trying to avoid this so if I suspect being “charged” I try to touch something that does not conduct electricity (such as a wooden table) as soon as possible, in the belief that this will “uncharge me”.

  • Is it true that touching wood will uncharge you?
  • How and when do I get charged? I noticed that it happens only during parts of the year, and after I get out of the car…

Answer accepted (score 31)

My brother, an electrical engineer, used to carry around a 1 megaohm resistor during the dry winter months when you easily get a shock after walking across a carpet and touching a light switch or another person. If you hold one lead of the resistor in your hand and touch the light switch or whatever else you are touching with the other the discharge turns from a nasty shock into a very mild and kind of amusing fizzle. It’s kind of fun and I’m sure you can find a resistor with large enough resistance lying around in your local physics lab, so give it a try! Of course I don’t know anybody geeky enough to actually use this as a practical solution.
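A rough sketch of why the resistor works. The ~100 pF body capacitance and ~10 kV charge below are assumed ballpark values, not measurements: the stored energy is the same either way, but the resistor caps the peak current and stretches the discharge over an RC time constant instead of a sub-microsecond spark.

```python
# Why a 1 megaohm resistor turns a nasty spark into a mild fizzle.
R = 1e6        # ohm, the resistor
C = 100e-12    # farad, assumed human-body capacitance
V = 10_000.0   # volt, assumed dry-winter static charge

tau = R * C               # discharge time constant
I_peak = V / R            # peak current through the resistor
energy = 0.5 * C * V**2   # total energy dissipated either way

print(tau)      # 1e-4 s: discharge spread over ~100 microseconds
print(I_peak)   # 0.01 A: current limited to ~10 mA
print(energy)   # 0.005 J: same stored energy as a direct spark
```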

Answer 2 (score 26)

Carry some metal in your pocket. When you suspect you are carrying an electric charge, take the metal (a coin?) out of your pocket and touch it to something grounded.

Answer 3 (score 8)

Free or nearly-free electrons on stuff like wool are getting rubbed off onto you so that your body holds some sort of net total electric charge. When you touch a metal door (or any piece of metal) then the electrons want to spread out to balance themselves between you and the metal. Since the metal conducts electricity very well, they fly off you very quickly which heats up the air in between you and the metal, giving you the painful “shock” feeling.

Wood won’t uncharge you very well, since it doesn’t conduct electricity very well. The only ways to prevent getting shocked are either not building up charge in the first place, or constantly touching metal so that the charges get released way before they can build up - in effect spreading out your shocks to many smaller shocks you can’t feel.

There are many ways to build up a static charge, but it is generally much, much easier when the air is dry. Since cold air is drier, this means you probably build up a charge more quickly in the winter.

As for how you build up charge in the first place, it is usually by rubbing certain materials together. Plastic or rubber rubbing against wool carpeting or clothes (or any sort of hair) will do it, which is the most common cause for people in their day-to-day lives. If you want to stop it, you could consider using dryer sheets, which use a substance which happens to be conductive to soften your clothes - this makes you constantly discharge as you touch your own clothes, so you achieve the “many tiny shocks” method alluded to above. Or, you could use a humidifier in your home, which adds water to the air, making static much harder to build up.

28: Why does wavelength change as light enters a different medium? (score 167040 in 2012)

Question

When light waves enter a medium of higher refractive index than the previous, why is it that:

its wavelength decreases, while its frequency stays the same?

Answer accepted (score 27)

(This is an intuitive explanation on my part, it may or may not be correct)

Symbols used: \(\lambda\) is wavelength, \(\nu\) is frequency, \(c,v\) are speeds of light in vacuum and in the medium.

Alright. First, we can look at just frequency and determine if frequency should change on passing through a medium.

Frequency can’t change

Now, let’s take a glass-air interface and pass light through it. In one second, \(\nu\) “crest”s will pass through the interface. Now, a crest cannot be destroyed except via interference, so that many crests must exit. Remember, a crest is a zone of maximum amplitude. Since amplitude is related to energy, when there is max amplitude going in, there is max amplitude going out, though the two maxima need not have the same value.

Also, we can directly say that, to conserve energy (which is dependent solely on frequency), the frequency must remain constant.

Speed can change

There doesn’t seem to be any reason for the speed to change, as long as the energy associated with unit length of the wave decreases. It’s like having a wide pipe with water flowing through it. The speed is slow, but there is a lot of mass being carried through the pipe. If we constrict the pipe, we get a jet of fast water. Here, there is less mass per unit length, but the speed is higher, so the net rate of transfer of mass is the same.

In this case, since \(\lambda\nu=v\), and \(\nu\) is constant, change of speed requires change of wavelength. This is analogous to the pipe, where increase of speed required decrease of cross-section (alternatively mass per unit length)

Why does it have to change?

Alright. Now that we have established that speed can change, let’s look at why. An EM wave (like light) carries alternating electric and magnetic fields with it. In any medium, the electric and magnetic fields are altered due to interaction with the medium. Basically, the permittivities/permeabilities change. This means that the light wave is altered in some manner. Since we can’t alter frequency, the only thing left is speed/wavelength (and amplitude, but that’s not it, as we shall see).

Using the relation between light and permittivity/permeability (\(\mu_0\varepsilon_0=1/c^2\) and \(\mu\varepsilon=1/v^2\)), and \(\mu=\mu_r\mu_0,\varepsilon=\varepsilon_r\varepsilon_0, n=c/v\) (n is refractive index), we get \(n=\sqrt{\mu_r\varepsilon_r}\), which explicitly states the relationship between electromagnetic properties of a material and its RI.

Basically, the relation \(\mu\varepsilon=1/v^2\) guarantees that the speed of light must change as it passes through a medium, and we get the change in wavelength as a consequence of this.
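The relation \(\lambda\nu = v\) with \(n = c/v\) can be worked through numerically; the 600 nm light and n = 1.5 glass below are just illustrative values:

```python
# Frequency stays fixed; speed and wavelength both shrink by the factor n.
c = 3.0e8          # m/s, vacuum speed of light (rounded)
n = 1.5            # assumed refractive index (typical glass)
lam_vac = 600e-9   # m, assumed vacuum wavelength (orange light)

f = c / lam_vac    # 5e14 Hz, unchanged across the interface
v = c / n          # 2e8 m/s inside the glass
lam_med = v / f    # 400 nm: wavelength reduced by the factor n

print(f, v, lam_med)
```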

Answer 2 (score 2)

The energy of the light is related to the frequency; when the light enters the medium there are interference patterns that cause the apparent speed of light to change; if the frequency changed, the energy would not be conserved. The wavelength changes to balance the change in speed.

Answer 3 (score 1)

Here is a slightly different take on this using the boundary conditions for electromagnetic fields at an interface.

A key boundary condition, that is derived from Faraday’s law, is that the component of the E-field tangential to the boundary must be continuous.

So take an EM wave travelling at normal incidence with the electric field solely in a direction tangential to the boundary. Let’s represent it as \({\bf E} = E_I \sin (\omega t - kx) \hat{j}\), where I have chosen that the wave travels towards positive \(x\) and is polarised in the \(y\) direction.

Let the interface be the plane at \(x=0\).

The continuity condition then demands that the E-field of the incident wave plus the E-field of the reflected wave must equal the E-field of the transmitted wave, all at \(x=0\). This is a condition that must be satisfied for all values of \(t\).

Hence \[ E_I \sin (\omega_I t) + E_{R} \sin (\omega_R t) = E_T \sin (\omega_T t)\]

For time-invariant E-field amplitudes, the only way this can be true for all \(t\) is if \(\omega_I = \omega_R = \omega_T\). i.e. the frequency of the transmitted wave is the same as that of the incident wave. Given that the speed of light in a medium is changed (for reasons explained in Manishearth’s answer), then the wavelength of light in the medium must also change.

29: What is the sign of the work done on the system and by the system? (score 166044 in 2018)

Question

What is the sign of the work done on the system and by the system?

My chemistry book says when work is done on the system, it is positive. When work is done by the system, it is negative.

My physics book says the opposite. It says that when work is done on the system, it is negative. When work is done by the system, it is positive.

Why do they differ?

Answer accepted (score 11)

It is just a matter of convention. It should be consistent throughout.

Case 1: Work done on the system is positive.

Here the first law is written as \[ \mathrm{d}U = \mathrm{d}Q + \mathrm{d}W \,.\tag{1}\]

If your frame of reference is the system, then the work done on the system (\(W\)) is positive and the heat added to the system is also positive, which means the change in internal energy is also positive by the first law of thermodynamics, which in turn means an increase in temperature. This appeals to common sense: here, a positive change in internal energy corresponds to an increase in temperature.

Case 2: Work done by the system is positive

Here the first law is written as \[ \mathrm{d}U = \mathrm{d}Q - \mathrm{d}W \,. \tag{2}\]

If work is done on the system, the \(\mathrm{d}W\) term becomes negative, and the two negatives make a positive, which is identical to equation (1); the heat added to the system is still positive here. The rest of the arguments follow as above.
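A tiny numerical sketch (with made-up numbers) of the two conventions describing the same process and agreeing on the change in internal energy:

```python
# Same process in both conventions: 30 J of heat flows into a gas while
# the surroundings do 20 J of work compressing it.
Q = 30.0  # J, heat added to the system (positive in both conventions)

# Convention 1 (chemistry): W is work done ON the system -> dU = Q + W
W_on = 20.0
dU_1 = Q + W_on      # 50 J

# Convention 2 (physics): W is work done BY the system -> dU = Q - W
W_by = -20.0         # the system does -20 J of work (it is compressed)
dU_2 = Q - W_by      # 50 J: same physics either way

print(dU_1, dU_2)
```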

Answer 2 (score 2)

It’s just a convention. In physics we are more interested in getting some work output, say from a mechanical device, an engine, etc., while in chemistry we are more concerned with the internal energy, so the signs are defined accordingly. In both cases the result is the same.

Physics case: \(\mathrm{d}U = \mathrm{d}Q - \mathrm{d}W\); doing work on the system increases the internal energy, since \(\mathrm{d}W\) is negative for work done on the system, and vice versa.

Chemistry case: \(\mathrm{d}U = \mathrm{d}Q + \mathrm{d}W\); doing work on the system increases the internal energy, since \(\mathrm{d}W\) is positive, which is obvious, and vice versa.

We can take either sign convention in a given problem, but we should be consistent with it throughout the problem to avoid confusion and mistakes.

30: How does the freezing temperature of water vary with respect to pressure? (score 164523 in 2019)

Question

I know when the pressure is reduced, the boiling temperature of water is reduced as well. But how does the pressure affect the freezing point of water?

In a low-pressure environment, is water’s freezing temperature higher or lower than \(0\sideset{^{\circ}}{}{\mathrm{C}} \, ?\)

Answer accepted (score 10)

If you decrease the pressure, the freezing point of water will increase ever so slightly: from 0° C at 1 atm pressure, it rises to 0.01° C at 0.006 atm. That is the triple point of water. At pressures below this, water is never liquid; it changes directly between the solid and gas phases (sublimation). The temperature of this phase change, the sublimation point, decreases as the pressure is decreased further. To learn more details, do an image search for “water phase diagram” and study the pictures.
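"Ever so slightly" can be quantified with the Clausius–Clapeyron relation, dP/dT = L/(TΔv), along the ice/water melting curve. The latent heat and specific volumes below are assumed handbook values at 0 °C, 1 atm, not figures from the answer:

```python
# Clausius-Clapeyron estimate of the melting-curve slope for water.
L = 334e3          # J/kg, latent heat of fusion (assumed handbook value)
T = 273.15         # K, melting point at 1 atm
v_liq = 1.000e-3   # m^3/kg, specific volume of liquid water
v_ice = 1.091e-3   # m^3/kg, specific volume of ice (ice is less dense)

dP_dT = L / (T * (v_liq - v_ice))   # Pa/K, negative because ice expands
dT_datm = 101325 / dP_dT            # K of melting-point shift per atm

print(dP_dT)    # ~ -1.3e7 Pa/K
print(dT_datm)  # ~ -0.0075 K/atm: lowering the pressure by 1 atm raises
                # the freezing point by only thousandths of a degree
```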

Answer 3 (score 3)

Here is an interesting article that shows how water was frozen at high temperature under pressure

http://www.azom.com/news.aspx?newsID=8016

Here is an extract:

“Sandia Convert Water to Ice in Nanoseconds”, published on March 19, 2007: Sandia’s huge Z machine, which generates temperatures hotter than the sun, has turned water to ice in nanoseconds.

“The three phases of water as we know them — cold ice, room temperature liquid, and hot vapor — are actually only a small part of water’s repertory of states,” says Sandia researcher Daniel Dolan. “Compressing water customarily heats it. But under extreme compression, it is easier for dense water to enter its solid phase [ice] than maintain the more energetic liquid phase [water].” Sandia is a National Nuclear Security Administration (NNSA) laboratory.

“Apparently it’s virtually impossible to keep water from freezing at pressures beyond 70,000 atmospheres,” Dolan says.

31: Why do we use Root Mean Square (RMS) values when talking about AC voltage (score 161710 in 2012)

Question

What makes it a good idea to use RMS rather than peak values of current and voltage when we talk about or compute with AC signals.

Answer accepted (score 19)

Attempting to find the plain average value of AC over a full cycle would give you zero. Hence, RMS values are used: they give the effective value of the AC voltage or current.

This RMS is a mathematical quantity (used in many math fields) used to compare both alternating and direct currents (or voltage). In other words (as an example), the RMS value of AC (current) is the direct current which when passed through a resistor for a given period of time would produce the same heat as that produced by alternating current when passed through the same resistor for the same time.

Practically, we use the RMS value for all kinds of AC appliances, and the same is applicable to alternating voltage. We take the RMS because AC is a variable quantity (alternating positive and negative values): we take the mean of the squared values and then the square root of that mean.

For a sinusoid with peak value \(I_0\), the squared current varies between \(0\) and \(I_0^2\) with mean value \(I_0^2/2\); taking the square root gives the RMS value \(I_0/\sqrt{2}\).


It’s example time: (I think you didn’t ask for the derivation of RMS)

Bulbs

Consider that both bulbs give out the same brightness, so they dissipate the same amount of heat (regardless of AC or DC). To relate the two, we have nothing better to use than the RMS value. The direct voltage for the bulb is 115 V while the alternating peak voltage is 170 V; both give the same power output. Hence \(V_{rms}=V_{dc}=\frac{V_{ac}}{\sqrt{2}}\approx 120\text{ V}\) (the figure shows 115 V only because I couldn't find a better image; the actual RMS of a 170 V peak is 120 V).

RMS


To further clarify your doubt regarding the peak value: it’s simply similar to finding the distance between two points \((x_1,y_1)\) and \((x_2,y_2)\) in the Cartesian system, represented as (sum of squares & then the "root") \[d=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}\]
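The \(I_0/\sqrt{2}\) rule can be checked numerically; here is a minimal sketch (the peak value and sample count are arbitrary illustrative choices):

```python
import math

# Mean of the square of a sinusoid over one full cycle, then the root:
# this should land on the peak value divided by sqrt(2).
I0 = 10.0                      # arbitrary peak current, A
N = 100_000                    # samples over one period
mean_square = sum((I0 * math.sin(2 * math.pi * k / N)) ** 2 for k in range(N)) / N
i_rms = math.sqrt(mean_square)

print(i_rms, I0 / math.sqrt(2))   # both ~7.071
```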

Answer 2 (score 16)

What makes it a good idea to use RMS rather than peak values

The rms value, not the peak value, is the equivalent DC value that gives the same average power.

Recall that power is the product of voltage and current:

\(p(t) = v(t) \cdot i(t)\)

For a resistor, we have:

\(p(t) = R[i(t)]^2\)

To find the average power, we must take the time average of both sides:

\(p_{avg} = R\,\dfrac{\int_{T_1}^{T_2} {[i(t)]}^2\, dt}{T_2-T_1}\)

You’ll recognize the fraction on the right hand side as the mean of the square of \(i(t)\).

Denoting \(i_{rms}\) (the root of the mean of the square) as:

\(i_{rms} = \sqrt{\dfrac{\int_{T_1}^{T_2} {[i(t)]}^2\, dt}{T_2-T_1}}\)

we have:

\(p_{avg} = R[i_{rms}]^2\)

For DC, we have:

\(p = R I^2\)

So, we see that the rms value of the time varying current produces the same average power, for a given resistor, as a constant current of that value.

This is what makes the rms value “a good idea”.
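The equivalence can be seen numerically with a small sketch (resistor and peak-current values are illustrative):

```python
import math

# Average AC power R*i(t)^2 over one cycle vs. DC power at i_rms:
# for i(t) = I0*sin(wt), both should equal R*I0^2/2.
R, I0 = 8.0, 2.0               # illustrative resistance (ohm) and peak current (A)
N = 100_000
p_avg_ac = sum(R * (I0 * math.sin(2 * math.pi * k / N)) ** 2 for k in range(N)) / N
p_dc = R * (I0 / math.sqrt(2)) ** 2

print(p_avg_ac, p_dc)          # both ~16.0 W
```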

Answer 3 (score 4)

In many applications we are interested in the power. For example your electricity bill is based on the power you consume. For a DC source the power is:

\[ W = VI = \frac{V^2}{R} \]

and for an AC source (assuming a resistive load so the voltage and current stay in phase):

\[ W = V_{rms}I_{rms} = \frac{V_{rms}^2}{R} \]

So using the RMS values makes the power easy to calculate. The RMS values are, in a sense, the equivalent to the values in a DC circuit.

32: Why do two bodies of different masses fall at the same rate (in the absence of air resistance)? (score 160219 in 2014)

Question

I’m far from being a physics expert and figured this would be a good place to ask a beginner question that has been confusing me for some time.

According to Galileo, two bodies of different masses, dropped from the same height, will touch the floor at the same time in the absence of air resistance.

BUT Newton’s second law states that \(a = F/m\), with \(a\) the acceleration of a particle, \(m\) its mass and \(F\) the sum of forces applied to it.

I understand that acceleration represents a variation of velocity and velocity represents a variation of position. I don’t comprehend why the mass, which is seemingly affecting the acceleration, does not affect the “time of impact”.

Can someone explain this to me? I feel pretty dumb right now :)

Answer accepted (score 19)

It is because the force at work here (gravity) also depends on the mass.

Gravity acts on a body of mass \(m\) with

\[F = mg\]

Plug this into \[F=ma\] and you get

\[ma = mg\] \[a = g\]

and this is true for all bodies no matter what the mass is. Since they are accelerated the same and start with the same initial conditions (at rest and dropped from a height h) they will hit the floor at the same time.

This is a peculiar aspect of gravity and underlying this is the equality of inertial mass and gravitational mass (here only the ratio must be the same for this to be true but Einstein later showed that they’re really the same, i.e. the ratio is 1)
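The cancellation of the mass can be made concrete with a short sketch (the masses and the drop height are arbitrary):

```python
import math

# For each mass, the weight is F = m*g, so a = F/m = g: identical for
# all masses, hence identical fall times t = sqrt(2h/g) from rest.
g = 9.81          # m/s^2
h = 20.0          # drop height in metres (illustrative)
for m in (0.1, 1.0, 1000.0):   # kg
    F = m * g
    a = F / m
    t = math.sqrt(2 * h / a)
    print(m, a, round(t, 3))   # same a and t every time
```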

Answer 2 (score 18)

Newton’s gravitational force is proportional to the mass of a body, \(F=\frac{GM}{R^2}\times m\), where in the case you’re thinking about \(M\) is the mass of the earth, \(R\) is the radius of the earth, and \(G\) is Newton’s gravitational constant.

Consequently, the acceleration is \(a=\frac{F}{m}=\frac{GM}{R^2}\), which is independent of the mass of the object. Hence any two objects that are subject only to the force of gravity will fall with the same acceleration and hence they will hit the ground at the same time.

What I think you were missing is that the force \(F\) on the two bodies is not the same, but the accelerations are the same.
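Plugging in commonly tabulated Earth values shows that \(a=GM/R^2\) is the familiar number, with no dependence on the falling object's mass:

```python
# a = G*M/R^2 with rough Earth values gives ~9.8 m/s^2,
# independent of the mass of the dropped object.
G = 6.674e-11      # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24       # kg, mass of the Earth
R = 6.371e6        # m, mean radius of the Earth
a = G * M / R**2
print(a)           # ~9.82 m/s^2
```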


33: Which direction does air flow? (score 160080 in 2011)

Question

I remember learning this in high school, but have forgotten it, and can’t seem to find it anywhere online.

Air travels from areas of high pressure to low pressure…correct? So if I have a cold room in my house, does the air move from the warm rooms to the cold room or the other way around?

Answer accepted (score 16)

Air does indeed flow from high pressure to low pressure area (see the wind arrows on a weather chart), but in the case of two rooms the much more important effect is that of warm thinner air rising towards the ceiling when the air from the two rooms gets mixed.

Thus, cold air from the cold room will be leaving the room close to the floor (if the temperature difference is large enough you can actually feel it, otherwise you can use a candle to detect the direction of air movement). At the same time, warm air from the warm room will rise and move into the cold room close to the ceiling (again, sometimes you can feel this otherwise you can detect it with a candle). Similar air movements take place between your house and the outside when you open your house’s door in winter or summer.

The tendency of warm air to rise towards the ceiling is actually exploited in floor heating. This was understood and taken advantage of already by the ancient Romans, see hypocaust.

Answer 2 (score 4)

Actually, cold air just sinks while hot air rises (due to the density of the collection of particles). This "flowing" is just the wind pushing particles around, so you'll sense a change in temperature. If we want to talk about heat transfer (which I believe the question is really getting at), then "cold disappears" by "hot entering": think about it, temperature is in some sense related to the speed of the collection of particles.

Answer 3 (score 2)

Cold air flows downward relative to hot air because it is more dense and sinks, while hot air rises.

34: What is the relationship between force and kinetic energy? (score 159125 in 2018)

Question

I have learned that \[\mathbf{F} = m \mathbf{a}\] where both the force \(\mathbf{F}\) and acceleration \(\mathbf{a}\) are vectors. This makes sense, since both force and acceleration have a direction.

On the other hand, the kinetic energy \[K = \frac12 mv^2\] looks completely different. It doesn’t seem to depend on the direction. How are these two concepts related?

Answer accepted (score 5)

When a force is applied over a certain distance, that force does mechanical work, \(W\). If the force is constant \(F\) and the object it is exerted on is moved by a distance \(\Delta x\), then \(W=F\Delta x\). If the force is not constant but a function of the position, this turns into an integral: \[W = \int_{x_1}^{x_2}F(x)\,\mathrm d x.\] If you don’t know calculus yet, just ignore this.

Note that it is not important how long (in time) the force is exerted. E.g. a cup on a table feels the constant force due to gravity but it won’t move (because the table is pushing it upwards with an equal but opposite force), so there is no work done on that cup, meaning that its energy content won’t change.

Work is basically just energy change. Depending on how the work is applied, it will increase (or decrease) a specific kind of energy. If the work leads to a change in the (absolute) velocity, it will modify the kinetic energy.

E.g. if a car accelerates from standstill with constant acceleration \(a\) (i.e. the engine exerts a constant forward force on the car), its velocity increases linearly in time, \(v(t)=at\), and its position quadratically, \(x(t)=\frac{1}{2}at^2\). After a time \(t_1\), it has covered a distance \(x_1=x(t_1)=\frac{1}{2}at_1^2\), and the work done by the engine is \(Fx_1 = max_1= \frac{1}{2}ma^2t_1^2\). At time \(t_1\), the velocity of the car is \(v_1=v(t_1)=at_1\), so we can write the work done by the engine as \(\frac{1}{2}mv_1^2\). This is exactly the amount of kinetic energy gained by the car, so the work done by the engine was used to increase the car’s kinetic energy.
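The car example can be checked with arbitrary illustrative numbers for the mass, acceleration and time:

```python
# Work-energy check for a car accelerating from rest with constant a:
# v1 = a*t1, x1 = a*t1^2/2, and the engine's work F*x1 should equal
# the kinetic energy m*v1^2/2. All numbers are illustrative.
m, a, t1 = 1200.0, 3.0, 10.0   # kg, m/s^2, s
F = m * a
x1 = 0.5 * a * t1**2
v1 = a * t1
work = F * x1
kinetic = 0.5 * m * v1**2
print(work, kinetic)           # equal: 540000.0 J each
```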

Answer 2 (score 1)

It is really a problem of definition: kinetic energy is, by definition, a scalar, not a vector. Strictly speaking, you don’t apply a kinetic energy to a body; a body carries kinetic energy by the mere virtue of its speed, and there is a difference between speed and velocity. The quantity kinetic energy can be used in equations such as mechanical energy conservation or the work-energy theorem. Velocity is a vector, carrying information about both magnitude and direction; speed carries only magnitude, and note that the \(v\) in the kinetic energy formula refers only to speed.

Answer 3 (score -2)

The full equation goes as follows:

\[D = -\frac{M}{2K}\,\ln\!\left(\frac{T - Mg - Kv^2}{T - Mg}\right)\]

where \(D\) = distance traveled, \(M\) = mass of the subject, \(g\) = acceleration due to gravity, \(v\) = initial speed, \(T\) = thrust, and \(K\) = drag factor (in units of mass/distance).

If \(K \to 0\), then l’Hôpital gives (remember to take the constant \(-M/2\) outside the limit):

\[\lim_{K\to 0} D(K) = -\frac{M}{2}\,\lim_{K\to 0}\left(\frac{T - Mg}{T - Mg - Kv^2}\right)\left(\frac{-v^2}{T - Mg}\right) = \frac{Mv^2/2}{T - Mg}\]

That is, D = kinetic energy / net force (thrust minus gravity) when there’s no drag.
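As a sanity check on that limit, the formula can be evaluated numerically for a very small drag factor (all values here are illustrative and chosen so that \(T > Mg\)):

```python
import math

# Numerical check of the K -> 0 limit of
# D = -(M/2K) * ln((T - M*g - K*v^2)/(T - M*g)):
# for small K it should approach (M*v^2/2)/(T - M*g).
M, g, v, T = 2.0, 9.81, 30.0, 100.0   # illustrative; T > M*g

def D(K):
    return -(M / (2 * K)) * math.log((T - M * g - K * v**2) / (T - M * g))

limit = (0.5 * M * v**2) / (T - M * g)
print(D(1e-8), limit)   # nearly equal, ~11.2 each
```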

35: Linear acceleration vs angular acceleration equation (score 158890 in 2012)

Question

I’m learning about angular velocity, momentum, etc. and how all the equations are parallel to linear equations such as velocity or momentum. However, I’m having trouble comparing angular acceleration to linear acceleration.

Looking at each equation, they are not as similar as some of the other equations are:

  • Angular acceleration = velocity squared / radius
  • Linear acceleration = force / mass

I would think angular acceleration would take torque into consideration. How is \(v^2\) analogous to force, and how does the radius’s relation to \(v^2\) match the relationship between mass and force?


I suppose the root of this misunderstanding is how I’m thinking of angular acceleration, which is only a vector representing an axis’s direction, and having a magnitude equal to the number of radians rotated per second.

I also am confused on what exactly ‘V’ (tangential velocity) represents and how it’s used. Is it a vector whose magnitude is equal to the number of radians any point on a polygon should rotate? What is the explanation?

Answer accepted (score 7)

You made a mistake in assuming that the angular acceleration (\(\alpha\)) is equal to \(v^2/r\) which actually is the centripetal acceleration. In simple words, angular acceleration is the rate of change of angular velocity, which further is the rate of change of the angle \(\theta\). This is very similar to how the linear acceleration is defined.

\[a=\frac{d^2x}{dt^2} \rightarrow \alpha=\frac{d^2\theta}{dt^2}\]

Like the linear acceleration is \(F/m\), the angular acceleration is indeed \(\tau/I\), \(\tau\) being the torque and I being moment of inertia (equivalent to mass).

I also am confused on what exactly ‘V’ (tangential velocity) represents and how it’s used. Is it a vector whose magnitude is equal to the number of radians any point on a polygon should rotate?

The tangential velocity, in the case of a body moving with constant speed in a circle, is the same as its ordinary speed. The name comes from the fact that this velocity points along the tangent to the circle (the path of motion of the body). Its magnitude is equal to the rate at which the body moves along the circle. Geometrically you can show that \(v = r\omega\).
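A tiny sketch of the two relations mentioned, \(\alpha=\tau/I\) and \(v=r\omega\) (all values illustrative):

```python
# Angular analogue of F = m*a is tau = I*alpha, so alpha = tau / I.
tau = 12.0          # N*m, applied torque (illustrative)
I = 4.0             # kg*m^2, moment of inertia (illustrative)
alpha = tau / I     # rad/s^2
print(alpha)        # 3.0

# Tangential speed of a point at radius r on a body spinning at omega.
r, omega = 0.5, 10.0   # m, rad/s
v = r * omega          # m/s
print(v)               # 5.0
```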

Answer 2 (score 6)

\(a_c = \frac{v^2}{r}\) isn’t angular acceleration. It’s the magnitude of the linear acceleration towards the centre of an object following a circular path at constant angular velocity. Angular acceleration is the derivative of angular velocity, and the analogue of Newton’s second law is that angular acceleration equals torque divided by moment of inertia.

Answer 3 (score 3)

Always start with the units. They’ll tell you a lot about the equations, and allow you to fix consistency errors. Incidentally, this is why I prefer Leibniz’s notation over Newton’s for derivatives, the units are immediately determined by examining the derivative, e.g. \(dx/dt\) has units of distance over time assuming the usual definition of \(x\) and \(t\).

In this case, the angle, \(\theta\), is the equivalent of distance traveled in linear kinematics, and it has units of radians (\({rad}\)). (Radians, being unitless, are to some extent a placeholder, but placeholders can be very useful, so keep them in mind.) So the rate of change of angle with respect to time, \(\omega\), has units of \({rad}/s\). Angular acceleration, \(\alpha\), then has units of \({rad}/s^2\).

With those in mind, you can immediately tell that \(a_c = \frac{v^2}{r}\) is not an angular acceleration, but a linear acceleration, as described by Peter. Similarly, angular acceleration is not directly related to force, but to torque, \(\tau = I \alpha\), where \(I\) is the moment of inertia. (From a mathematical perspective, the moment of inertia is the second moment of the mass distribution where the center of mass is the first moment.) Torque has the units \({kg}\ m^2/s^2\), where the radians were dropped. Note, it has units of energy, or \((Force)(distance)\), and \(\tau = r \times F\).

On any single parameter curve in \(\mathbb{R}^n\), \(n\geq2\), the derivative with respect to that parameter always lies tangent to the curve. The derivative is literally showing us how the position is going to change. From a physics perspective, you can think of this as attaching the velocity and acceleration vectors to the moving object, itself, as in drawing a free body diagram.

To be concrete, for uniform circular motion, the position is

\[r(t) = R( \cos(t) \hat{i} + \sin(t) \hat{j} )\]

where \(R\) is the radius of the circle, \(\hat{i}\) and \(\hat{j}\) are the unit vectors in the \(x\) and \(y\) directions, respectively, and the velocity is

\[v(t) = R( -\sin(t) \hat{i} + \cos(t) \hat{j} ).\]

Note, that the velocity is perpendicular to the position which is a property of circular motion. From this, you should be able to mathematically demonstrate that the acceleration is perpendicular to the velocity and anti-parallel to the position. I’ll leave you with the problem of understanding why this makes sense physically, also.
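Those perpendicularity claims are easy to verify numerically at any time \(t\):

```python
import math

# For r(t) = R(cos t, sin t): v(t) = R(-sin t, cos t) and
# a(t) = R(-cos t, -sin t). Check that v is perpendicular to r
# (dot product zero) and a is anti-parallel to r (a = -r).
R, t = 3.0, 1.234   # illustrative radius and time
r = (R * math.cos(t), R * math.sin(t))
v = (-R * math.sin(t), R * math.cos(t))
a = (-R * math.cos(t), -R * math.sin(t))

print(r[0] * v[0] + r[1] * v[1])   # 0 (perpendicular)
print(a[0] + r[0], a[1] + r[1])    # 0, 0 (a = -r)
```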

36: Why does a remote car key work when held to your head/body? (score 156320 in 2016)

Question

I was trying to unlock my car with a keyfob, but I was out of range. A friend of mine said that I have to hold the transmitter next to my head. It worked, so I tried the following later that day:

  • Walked away from the car until I was out of range
  • Put key next to my head (it worked)
  • Put key on my chest (it worked)
  • Put key on my leg (didn’t work)

So first I thought it has to do with height of the transmitter. But I am out of range if I use the key at the same height as my head but not right next to my head. Same applies when my key is at the same height as my chest. So it has nothing to do with height (as it appears).

Then I thought, my body is acting like an antenna, but how is that possible if I am holding the key? Why would it only amplify the signal if I hold it against my head and not if I simply hold it into my hand?

Here’s a vid of Top Gear demonstrating it.

Answer accepted (score 81)

This is a really interesting question. It turns out that your body is reasonably conductive (think salt water, more on that in the answer to this question), and that it can couple to RF sources capacitively. Referring to the Wikipedia article on keyless entry systems; they typically operate at an RF frequency of \(315\text{ MHz}\), the wavelength of which is about \(1\text{ m}\). Effective antennas (ignoring fractal antennas) typically have a length of \(\frac{\lambda}{2}=\frac{1}{2}\text{m}\approx1.5\text{ ft}\).
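The wavelength quoted above follows from \(\lambda = c/f\):

```python
# Wavelength of the keyfob's 315 MHz carrier: lambda = c / f is about
# a metre, so a half-wave antenna would need to be ~0.5 m long.
c = 2.998e8        # speed of light, m/s
f = 315e6          # carrier frequency, Hz
wavelength = c / f
print(wavelength)       # ~0.95 m
print(wavelength / 2)   # ~0.48 m half-wave length
```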

So, the effect is probably caused by one or more of the cavities in your body (maybe your head or chest cavity) acting as a resonance chamber for the RF signal from your wireless remote. For another example of how a resonance chamber can amplify waves think about the hollow area below the strings of a guitar. Without the hollow cavity the sound from the guitar would be almost imperceptible.

Edit: As elucidated in the comments, a cavity doesn’t necessarily need to be an empty space; just a bounded area which partially reflects electromagnetic waves at the boundaries. The area occupied by your brain satisfies these conditions.

Edit 2: As pointed out in the comments, a string instrument is significantly louder with just a sounding board behind the strings, so my analogy, though true, is a bit misleading.

Edit 3: As promised in the comments, I made some more careful measurements of the effect in question, using a number of different orientations of remote position and pointing. I’ve posted these as a separate answer to this question.

Answer 2 (score 36)

As promised in the comments to my answer, I went out and measured the effect in a number of different configurations (a couple of days later than promised :-)). For those of you who just want the conclusions, here they are:

The remote seems to work better when held to the head though the improvement isn’t as marked as one might have expected from a google search of the topic. The best possible orientation seems to be to hold the remote flat against your temple. If you aren’t willing to hold it to your head, pointing it at the vehicle seems to work better than pointing it up, and there doesn’t seem to be much dependence on how high you hold it. Finally, holding the remote to your chest is worse than just holding it at arm’s length.


The Experiment

I chose six different positions in which to hold the remote, and in each of those positions I held the remote in two different orientations (described in the list below). In each position/orientation I clicked the remote 3 times, waiting a few seconds between clicks. I recorded the number of times out of 3 that the car responded to my click.

The car, a 2009 Volkswagen GTI, was parked sideways. Temperature: 70.5\(^\circ\) F; Barometric Pressure: 29.75 inHg; Humidity 86%; Winds: ~5 mph. There were no large structures around except for the concrete-encased stainless steel vacuum tube of the LIGO Livingston Interferometer, which runs parallel to the measurement axis and extends for kilometers in both directions. The battery in my remote is a bit old, but I tried to keep my clicks evenly spaced and began with several discarded clicks to try to cancel out battery effects.

The different orientations are documented in the picture below, but here is a description

  • Low (Forward/Up): Held down by my leg pointing the remote towards the vehicle or pointing it directly up into the sky.
  • Middle (Forward/Up): Held my arm extended to the right pointing the remote towards the vehicle or pointing it directly up into the sky.
  • High (Forward/Up): Held my arm high above my head pointing the remote towards the vehicle or pointing it directly up into the sky.
  • Chin (Pointed/Flat): Held against my chin either pointed up into my chin or flat against my chin.
  • Temple (Pointed/Flat): Held against my temple either pointed into the temple (like a salute) or held flat against my temple.
  • Chest (Pointed/Flat): Held at the center of my chest pointed towards my chest or held flat against it with the remote pointing up.

Different Orientations are Shown


The Results

In table form: [image: results table]

and graphically: [image: bar chart of results]

Answer 3 (score 9)

Remote “key fob” designers intentionally limit size so they conveniently fit in your pocket.

However, the convenience comes at a big price - the tiny loop antenna inside is extremely inefficient, transmitting less than 10% of the energy pumped into it, while the rest is simply converted into heat.

When holding your remote to your head, your arm, shoulder and head form a much larger “body loop” antenna which is almost 100 times more efficient than the remote’s antenna.

Then, just like in a transformer, the small single “winding” of the small loop magnetically couples with the larger, nearby single “winding” of your “body-loop”.

The magnetic coupling between these two antennas is not great, but it’s good enough to make the combination antenna around 2x to 3x better than the remote alone, resulting in a notable improvement in operating range.

37: Why does the moon sometimes appear giant and a orange red color near the horizon? (score 155518 in 2015)

Question

I’ve read various ideas about why the moon looks larger on the horizon. The most reasonable one in my opinion is that it is due to how our brain calculates (perceives) distance, with objects high above the horizon being generally further away than objects closer to the horizon.

But every once in a while, the moon looks absolutely huge and has a orange red color to it. Both the size and color diminish as it moves further above the horizon. This does not seem to fit in with the regular perceived size changes that I already mentioned.

So what is the name of this giant orange red effect and what causes it?

Answer accepted (score 27)

Harvest Moon (Source, Wikipedia Commons)

The moon is generally called a “Harvest Moon” when it appears that way (i.e. large and red) in autumn, amongst a few other names. There are other names that are associated with specific timeframes as well. The colour is due to atmospheric scattering (Also known as Rayleigh scattering):

[You] may have noticed that they always occur when the Sun or Moon is close to the horizon. If you think about it, sunlight or moonlight must travel through the maximum amount of atmosphere to get to your eyes when the Sun or Moon is on the horizon (remember that the atmosphere is a sphere around the Earth). So, you expect more blue light to be scattered from sunlight or moonlight when the Sun or Moon is on the horizon than when it is, say, overhead; this makes the object look redder.
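The \(\lambda^{-4}\) dependence of Rayleigh scattering makes the blue/red asymmetry easy to quantify (the two wavelengths are typical illustrative choices):

```python
# Rayleigh scattering strength goes as 1/lambda^4, so blue light
# (~450 nm) is scattered out of the beam far more than red (~650 nm);
# a long atmospheric path therefore leaves the Moon looking red.
blue, red = 450e-9, 650e-9   # wavelengths in metres (illustrative)
ratio = (red / blue) ** 4
print(ratio)   # blue scatters ~4.4x more strongly than red
```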

As to the size, that is commonly referred to as the “Moon Illusion”, which may be a combination of many factors. The most common explanation is that the frame of reference just tricks our brains. Also, if you look straight up, the perceived distance is much smaller to our brains than the distance to the horizon. We don’t perceive the sky to be a hemispherical bowl over us, but rather a much more shallow bowl. Just ask anyone to point to the halfway point between the horizon and zenith, and you will see that the angle tends to be closer to 30 degrees as opposed to the 45 it should be.

University of Wisconsin discussion on the Moon Illusion.

NASA discussion on moon illusion.

A graphical representation of this:

Optical Illusion illustrated

Dr. Phil Plait discusses the illusion in detail.

Answer 2 (score 23)

The two effects are not related.

The size appearing larger is a matter of some speculation to this day, but it is purely a psychological effect. If you want to prove this, take a look at the moon while standing up and looking between your legs. It won’t look nearly as large.

The red/orange color is related to the sunset being red. In fact, it’s the same thing exactly. The blue and green light has already been scattered, leaving only the red/orange light. This can be exacerbated by any of the things which cause sunsets to be more vivid, including pollution, clouds, dust, volcanic activity, etc.

Answer 3 (score 3)

It is an optical illusion. It only looks bigger near the horizon because it can more easily be compared to familiar objects on the ground. If you hold up a coin in front of your line of sight while looking at the moon and then compare your arm extension for a low moon and a high moon you see that they are the same. IOW, the diameters are the same.

Don’t have a link because I learned this from the late night PBS astronomy show Jack Horkheimer: Star Gazer years ago and I don’t think Jack would lie to us ;o)

Edit to answer the color question. At a low angle your line of sight is cutting through more atmosphere, so the “color saturation” goes up depending on what gases are in the atmosphere at that particular time and area.

38: Why does the road look like it’s wet on hot days? (score 149961 in 2018)

Question

Often, I’ll be driving down the road on a summer day, and as I look ahead toward the horizon, I notice that the road looks like there’s a puddle of water on it, or that it was somehow wet. Of course, as I get closer, the effect disappears.

I know that it is some kind of atmospheric effect. What is it called, and how does it work?

Answer accepted (score 28)

The phenomenon is called Mirage (EDIT: I called it Fata Morgana earlier, but a Fata Morgana is a special case of mirage that’s a bit more complex). The responsible effect is the dependence of the refractive index of air on the density of air, which, in turn, depends on the temperature of the air (hot air being less dense than cold air).

Mechanism Leading to a Fata Morgana

A non-constant density leads to refraction of light. If there’s a continuous gradient in the density, you get a bent curve (i) as opposed to light coming straight at you (d). Your eye does not know, of course, that the light (i) coming at it was bent, so your eye/brain continues the incoming light in a straight line (v).

This mirroring of the car (or other objects) then tricks you into thinking the road is wet, because a wet street would also lead to a reflection. In addition, the air wobbles (i.e. density fluctuations), causing the mirror image to wobble as well, which adds to the illusion of water.
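To get a feel for why the mirage only appears near grazing incidence, here is a rough estimate. It assumes \((n-1)\) for air scales as \(1/T\) at fixed pressure, with \(n-1 \approx 2.93\times10^{-4}\) at 288 K, and a hot layer over the asphalt at about 323 K; both numbers are ballpark assumptions:

```python
import math

# Treat the hot layer as having a slightly lower refractive index
# than the cool air above; total reflection sets in at
# sin(theta_c) = n_hot / n_cold (angle measured from the vertical).
n_cold = 1 + 2.93e-4                    # cool air at ~288 K (assumed)
n_hot = 1 + 2.93e-4 * 288.0 / 323.0     # hot layer at ~323 K (assumed)

theta_c = math.asin(n_hot / n_cold)
grazing_deg = math.degrees(math.pi / 2 - theta_c)
print(grazing_deg)   # roughly half a degree: only near the horizon
```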

Answer 2 (score 13)

The light beam bends at the interface between a cold air mass and a mass of hot air, as often seen in a pavement mirage.

Light Bend

As we look into the hot air region we see light coming from the colder region, the sky, as if reflected from a metallic surface.

Highway_mirage

Physical explanation: the index of refraction of air is temperature dependent, which makes the light rays bend in the transition layer between hot and cold air.

Answer 3 (score 9)

It is a mirage: in particular, it is caused by hot air near the road with cooler air above it, which creates a gradient in the refractive index of the air and makes a virtual image of the sky appear to be on or below the road. Air currents make this shimmer, similar to a reflection of the sky on water, hence causing the illusion of wetness.

39: What software programs are used to draw physics diagrams, and what are their relative merits? (score 148728 in 2016)

Question

Undoubtedly, people use a variety of programs to draw diagrams for physics, but I am not familiar with many of them. I usually hand-draw things in GIMP which is powerful in some regards, but it is time consuming to do things like draw circles or arrows because I make them from more primitive tools. It is also difficult to be precise.

I know some people use LaTeX, but I am not quite sure how versatile or easy it is. The only other tools I know are Microsoft Paint and the tools built into Microsoft Office.

So, which tools are commonly used by physicists? What are their good and bad points (features, ease of use, portability, etc.)?

I am looking for a tool with high flexibility and minimal learning curve/development time. While I would like to hand-draw and drag-and-drop pre-made shapes, I also want to specify the exact locations of curves and shapes with equations when I need better precision. Moreover, minimal programming functionality would be nice additional feature (i.e. the ability to run through a loop that draws a series of lines with a varying parameter).

Please recommend a few pieces of software if they are good for different situations.

Answer accepted (score 45)

I’ve had good experiences with Inkscape. It has a GUI interface, but allows you to enter coordinates directly if you want, and it’s scriptable. There is a plug-in that allows you to enter LaTeX directly (for labels and such). The downside is that it is very much still in development, so sometimes you find that a feature you want is not completely implemented yet.

As an example, here is a poster I made last week, entirely within Inkscape: Poster made with Inkscape

Inkscape now also has the “JessyInk” plug-in which allows you to use it to make presentations (à la Powerpoint). The presentation can be viewed in a web browser as SVG, or exported to PDF.

If you have a Mac and don’t mind spending some money ($100), I’ve heard good things about OmniGraffle.

Answer 2 (score 41)

I’m learning TikZ (a drawing package for LaTeX) as we speak. It’s good for two-dimensional line drawings, the syntax for specifying shapes and curves is extremely versatile, but the learning curve is steeper than LaTeX even.

There is a superb gallery of TikZ examples.

Here is another collection of neat TikZ examples on SE.tex.
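As a taste of the syntax, here is a minimal, self-contained TikZ sketch of a classic physics diagram (a block on an incline with its weight and normal-force arrows; the coordinates and the 20.6° slope angle are just illustrative choices):

```latex
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
  % Incline: a right triangle with slope angle atan(1.5/4) ~ 20.6 degrees
  \draw (0,0) -- (4,0) -- (4,1.5) -- cycle;
  % Block resting on the slope, rotated to match the incline angle
  \draw[rotate around={20.6:(3,1.125)}] (2.6,1.125) rectangle (3.4,1.525);
  % Force arrows: weight straight down, normal force perpendicular to slope
  \draw[->,thick] (3,1.3) -- (3,0.3) node[below] {$mg$};
  \draw[->,thick] (3,1.3) -- ++(110.6:1) node[above] {$N$};
\end{tikzpicture}
\end{document}
```

Compiling with `pdflatex` produces a standalone PDF of just the figure, which can then be included in a larger document.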

Answer 3 (score 33)

I’ll interpret your term diagram as “any fancy image that captures some physics”.

For this I can hardly recommend anything else than MetaPost. It’s on par with TeX in being a little hard to learn, but once you master the basics you won’t believe you could ever have used anything else (in particular GIMP and Inkscape; a good analogy here would be TeX vs. MS Word).

Basic properties
  1. it’s a (simple) programming language
  2. it’s vectorial (this should probably go without saying but still)
  3. its primitives are things like points, lines, paths, splines
  4. it contains excellent image manipulation facilities; you can say things like “take this image, scale it up by two and rotate it by 60 degrees”
  5. you can insert TeX labels
  6. it can solve equations; This is a real killer that no one else offers. You can draw two curves \(X(t)\) and \(Y(t)\) (defined most comfortably as splines) and tell MP to compute their intersection, draw a point there and label it with some text
Success story

I used MP to create some polygons on a hexagonal lattice in the context of cluster expansions. There was a huge number of those polygons to draw, so I quickly abandoned all hope of trying to draw them by hand in Inkscape or something similar. True, it would probably have been quicker in the end, but I hate manual work; I’d rather spend much longer learning some programming language and then just code all the work in a few minutes. So I put together a simple MP program that converted my input data (vertices and edges of the polygons as plain numbers) into beautiful images. For one night’s work and my first time with MP I was more than satisfied.

Goodies

An example MetaPost illustration

Credit for this amazing picture goes to Johan Kåhrström (go also see more stuff under illustrations there).

40: What exactly is the difference between radiation, conduction, and convection? (score 148048 in 2016)

Question

Okay, so everywhere I’ve read, I hear the main difference is the requirement of a medium. But for example, if you take the case of heat ‘radiating’ from a red-hot iron, isn’t that actually convection and not radiation? I mean, isn’t the temperature difference between the surrounding air and the iron causing the air to gain heat?

Answer accepted (score 19)

To pretty much everything you stated in your question, “no”.

That convection requires a medium is not the main difference, it is simply the most obvious aspect of what is a fundamentally different mechanism for transferring energy. Convection is the transfer of energy by movement of a medium, whereas radiation is the transfer of energy by, well, thermal radiation. Conduction also requires a medium, but, again, it is a fundamentally different mechanism than either convection or radiation; in this case it is the transfer of energy through a medium.

Unfortunately, analogies are hard but if you can visualize the particles involved, it would help. Picture the red hot iron you mentioned. On a molecular level, the material is emitting lots and lots of photons (hence why it is glowing red). The creation of these photons takes energy; energy from the heat of the iron. These photons leave the iron, pass through the environment, and eventually collide with some other object where they are absorbed and deposit their energy. This is radiative heat transfer. If that energy is deposited on your retina or a CCD (like in a digital camera), an image forms over time. This is how infrared goggles work and they would work equally well in high vacuum as here on earth.

In conduction, the next simplest example, there is no generation of photons (physics nerds forgive me for the sake of simplicity). The individual atoms in the object are vibrating with heat energy. As each atom gains energy from its more energetic neighbors, so it gives up energy to its less energetic ones. Over time, the heat “travels” through the object.

In convection, the molecules of gas near the object gain energy, like in the conduction case, but those same molecules that gained energy then travel through the environment to some other location where they then give off their heat energy.

In summary:

  • radiation = generated and absorbed photons
  • conduction = molecules exciting their neighbors successively
  • convection = molecules heated like in conduction, but then move to another location
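The radiative item in the summary can be put into numbers with the Stefan-Boltzmann law, \(P = \varepsilon \sigma A T^4\). A small sketch for the red-hot iron mentioned above; the emissivity and surface area are illustrative guesses of mine, not values from the answer:

```python
# Rough sketch of radiative heat transfer via the Stefan-Boltzmann law,
# P = eps * sigma * A * T^4. Emissivity and area below are illustrative guesses.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(T_kelvin, area_m2, emissivity=0.7):
    """Total power radiated by a surface at absolute temperature T."""
    return emissivity * SIGMA * area_m2 * T_kelvin**4

# A red-hot iron surface (~1100 K) radiates vastly more than the same
# surface at room temperature (~293 K):
hot = radiated_power(1100.0, 0.01)   # ~580 W from 100 cm^2
cold = radiated_power(293.0, 0.01)   # ~3 W
print(hot, cold, hot / cold)
```

The \(T^4\) dependence is why a glowing object sheds so much heat by radiation alone, with no medium required.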

Answer 2 (score 13)

I will try to explain in simple words.

Every body with a temperature above 0 kelvin gives out (i.e. radiates) some heat in the form of waves. (So even we radiate!) Of course, the amount of this radiation depends on the temperature: the higher the temperature of the body, the more heat it gives out. Now, since this heat energy travels in the form of waves, it does not require any medium to travel, so it can travel through any medium, or through none at all.

Sun heating Earth

I will try explaining convection using a simple example.

Consider a beaker of water being heated from the bottom. The water in the lower region heats up, becomes less dense, and hence rises to the top. The (relatively) cooler water at the top then comes down and begins to heat up in turn. It too rises once it becomes less dense than the previously heated water above it, this time getting hotter than the water at the top. This process continues until every parcel of water has been heated.

As you can see, in this process the motion of the particles leads to the heating of the whole body (the water, in this case): the warmer ones move away from the source of heat to let the cooler ones collect the heat.

Convection in pot of water

So, it’s clear that convection requires a medium (specifically, a non-solid medium). Unlike radiation, if there is no medium near the source, it cannot lose its heat by convection. (It can of course still lose it via radiation.)

Let’s come to your heated iron rod case.

Glowing hot iron

What you said is partly correct: convection is one of the modes of heat transfer here. But so is radiation. Remember that the iron rod is much hotter than its surroundings, so it radiates a lot of heat. In fact, the heat is so intense that the rod glows bright red. (If you know a little about the EM spectrum, you would know that when the emission from a body includes the visible range, we can actually see (a part of) its emission spectrum.)

In reality, all 3 modes of heat transfer occur simultaneously.

Three modes of heat transfer

(Even in the above beaker example, the water molecules give out heat in the form of radiation as well, since they have a non-zero temperature. They also transfer heat by collisions with other water molecules, which is known as conduction. However, in that example, convection is the dominant mode of heat transfer.)

If you wish to know the exact difference between these modes of transfer, you would perhaps need to take up an intermediate-level engineering course.

Answer 3 (score 2)

No. Light (as you can see — it’s red hot! — and infrared light which you cannot see) leaves the metal surface and reaches your skin/thermometer directly, and would do so without air.

41: Solving for initial velocity required to launch a projectile to a given destination at a different height (score 146649 in 2012)

Question

I need to calculate the initial velocity required to launch a projectile at a given angle from point A to point B. The only force acting on the projectile after launch will be gravity – zero air resistance. The projectile is launched within a simulated, virtual environment; however, I am asking for help with the physics rather than the simulation itself.

I have had success (with a different equation) when point A and point B are at the same height; however, once point B is at a different height, my calculations become less precise – well, wrong in fact.

I researched the following formula for finding the range of a projectile on uneven ground. The parameters are all available to me, except the initial velocity that I need to solve for.

\(d\) : range or distance
\(v_i\) : initial velocity
\(g\) : gravity
\(\theta\) : launch angle
\(y_0\) : launch height

\[d = \frac{v_i \cos\theta}{g}\left(v_i \sin\theta + \sqrt{(v_i \sin\theta)^2 + 2 g y_0}\right)\]

I attempted to solve for initial velocity (eq: A):

\[v_i = \sqrt{\frac{d^2 g}{2\cos^2\theta\,(y_0 + d \tan\theta)}}\]

Using this equation in my simulation I apply the velocity to a normalised displacement vector and launch the projectile. It gets close to its target but:

  • \(x_{final}\) is always correct
  • \(z_{final}\) is always incorrect - it is close to the desired \(z\) but always offset by a seemingly proportional amount.

I have spent many hours trying to review the equation but I have been unsuccessful. Any advice would be most appreciated:

  • is this the correct equation?
  • have I solved for \(v_i\) correctly? I have looked for an example equation online but I have not found one structured how I need it, i.e. solving for \(v_i\)
  • I have spent hours researching online - perhaps I have missed some good (but entry-level) resources. Do you know of any reference material that may help me?

Update - I have now named the equation above for referencing below

Following on from the answer by @Pygmalion, which I am still gratefully working to understand:

I agree that your derived equation is equivalent to mine (A). Using yours (@Pygmalion’s) in my simulation, I therefore observe the same failures: the projectile always lands short of the target.

I have simplified the simulation keeping the launch height and target height the same. I still encounter the same problem with equation A; however, when passing the same parameters to the following equation the projectile always hits the target precisely (eq: B):

\[v_i = \sqrt{\frac{dg}{\sin2\theta}}\]

Given the success of equation B when the launch and target heights are the same and the failure of equation A given the same conditions, I question whether equation A is the correct one to solve the problem since I would expect it to work for any combination of launch|target height.

I think the help I need is around why equation B works. Does it cater for conditions that equation A overlooks? If equation A is unsuitable, are you able to recommend an alternative?

In short, the \(v_i\) calculated by equation A (in my simulations) is always less than \(v_i\) from equation B given the same inputs - the projectile therefore always falls short of its target.
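One check worth making: setting \(y_0 = 0\) in equation A gives \(d^2 g / (2\cos^2\theta \cdot d\tan\theta) = dg/(2\sin\theta\cos\theta) = dg/\sin 2\theta\), i.e. A reduces exactly to B, so the two should agree at equal heights. A quick numerical confirmation (a sketch; the sample numbers are arbitrary):

```python
import math

def v_eq_A(d, theta, y0, g=9.81):
    # Equation A: v = sqrt( d^2 g / (2 cos^2(theta) (y0 + d tan(theta))) )
    return math.sqrt(d**2 * g / (2 * math.cos(theta)**2 * (y0 + d * math.tan(theta))))

def v_eq_B(d, theta, g=9.81):
    # Equation B (level ground): v = sqrt( d g / sin(2 theta) )
    return math.sqrt(d * g / math.sin(2 * theta))

d, theta = 50.0, math.radians(35)
print(v_eq_A(d, theta, y0=0.0), v_eq_B(d, theta))  # identical when y0 = 0
```

Since the formulas coincide at equal heights, any remaining discrepancy in a simulation points to how the launch vector or displacement is being constructed rather than to the formula itself.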

Answer accepted (score 6)

These types of problems are solved by treating the projectile’s motion in the \(x\) and \(y\) directions separately. In the \(x\) direction you have constant-velocity motion

\[v_x = v_{x0} = v_0 \cos(\theta), \; (1)\]

\[x = v_{x0} t +x_0 = v_0 \cos(\theta) \; t +x_0, \; (2)\]

and in the \(y\) direction you have uniformly accelerated motion with acceleration \(-g\)

\[v_y = - g t + v_{y0} = - g t + v_0 \sin(\theta), \; (3)\]

\[y = - \frac{1}{2} g t^2 + v_{y0} t + y_0 = - \frac{1}{2} g t^2 + v_0 \sin(\theta) \; t + y_0. \; (4)\]

Your initial conditions are

\[x_0 = 0, \; y_0 \ne 0,\]

and the final conditions (at the moment \(t=T\) the projectile falls back to the ground) are

\[t = T, \; x = d, \; y = 0.\]

If you put the initial and final conditions into equations (2) and (4), you end up with two equations and two unknowns, \(v_0\) and \(T\). By eliminating \(T\) you get an expression for \(v_0\).

My calculations show that

\[v_0 = \frac{1}{\cos(\theta)}\sqrt{\frac{\frac{1}{2} g d^2}{d \tan(\theta)+y_0}}\]

which I believe is equal to your equation. Maybe your problem is that \(d\) means the displacement in the \(x\) direction, while the total displacement is \(\sqrt{d^2+y_0^2}\)?
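The expression above can be sanity-checked by plugging it back into the kinematics: compute \(v_0\), find the flight time \(T\) from equation (2), and confirm that equation (4) gives \(y = 0\) at \(t = T\). A sketch with arbitrary sample numbers:

```python
import math

def v0_from_range(d, theta, y0, g=9.81):
    # v0 = (1/cos(theta)) * sqrt( (g d^2 / 2) / (d tan(theta) + y0) )
    return math.sqrt(0.5 * g * d**2 / (d * math.tan(theta) + y0)) / math.cos(theta)

d, theta, y0, g = 40.0, math.radians(30), 5.0, 9.81
v0 = v0_from_range(d, theta, y0, g)

# Flight time from x = v0 cos(theta) t with x0 = 0 (equation (2)):
T = d / (v0 * math.cos(theta))
# Height at landing from equation (4); should come out ~0:
y_at_T = -0.5 * g * T**2 + v0 * math.sin(theta) * T + y0
print(v0, y_at_T)  # projectile launched at v0 lands exactly at x = d, y = 0
```

If the simulated projectile still misses with this \(v_0\), the bug is in how the simulation applies the velocity, not in the formula.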

42: What is a good introductory book on quantum mechanics? (score 145588 in 2013)

Question

I’m really interested in quantum theory and would like to learn all that I can about it. I’ve followed a few tutorials and read a few books but none satisfied me completely. I’m looking for introductions for beginners which do not depend heavily on linear algebra or calculus, or which provide a soft introduction for the requisite mathematics as they go along.

What are good introductory guides to QM along these lines?

Answer accepted (score 60)

Introduction to Quantum Mechanics by David Griffiths, any day! Just pick up this book once and try reading it. Since you have no prior background, this is the book to start with. It is aimed at students who have a solid background in basic calculus, but assumes very little background material besides it: A lot of linear algebra is introduced in an essentially self-contained way.

Furthermore, it contains all the essential basic material and examples such as the harmonic oscillator, hydrogen atom, etc. The second half of the book is dedicated to perturbation theory. For freshmen or second-year students this is a pretty good place to start learning about QM, although some of the other answers to this question suggest books that go a bit further, or proceed at a more rigorous level.

Answer 2 (score 32)

For quantum mechanics, the original is still the best:

  • Dirac’s “The Principles of Quantum Mechanics”.

It’s clear, it’s terse, and it’s comprehensive. All other books take most of their material from this source.

For a basic short introduction to quantum mechanics, you can’t beat:

This is very good and intuitive, and complementary to the remaining books.

  • Landau and Lifschitz “Quantum Mechanics”

is heavy on good exercises and mathematical tools. L&L include topics not covered anywhere else. The standard undergraduate books on quantum mechanics are not very good in comparison to these, and should not be used.

A book which requires a minimum of calculus or continuous mathematics is

  • Nielsen & Chuang: “Quantum Computation and Quantum Information”

This focuses on modern research, and discrete systems in quantum computation. If you don’t know calculus, learn it, but you might find this book the most accessible. It’s long though.

On advanced quantum mechanics, there are good books by Gottfried and by Sakurai. Berezin’s book is also a great classic.

For the path integral, you can read Feynman and Hibbs, but I like Feynman’s 1948 Reviews of Modern Physics article more. There is also a good book which covers the path integral:

  • Yourgrau & Mandelstam: Variational Principles in Classical and Quantum Physics.

The original source for the Fermionic path integral is still the best, in my opinion:

If you want to convince yourself quantum mechanics is necessary, you should recapitulate the historical development. For this, the following source is good:

  • Ter Haar’s “The Old Quantum Theory” (it’s short), to learn Bohr-Sommerfeld quantization

You can also read the Wikipedia page on old quantum theory for a sketchy summary, then look at the page on matrix mechanics. This explains the intuition Heisenberg had about matrix elements, something which is not in Dirac’s book or anywhere else. Heisenberg’s reasoning is also found to certain extent in the first chapters of this book:

  • Connes “Noncommutative geometry”.

This book is also very interesting for other reasons.

Answer 3 (score 17)

OK. First, you need some comfort with linear algebra. Go to the MIT OpenCourseWare site and watch the Linear Algebra lectures (videos) by Strang. These are great.

Next, watch the “Theoretical Minimum” videos by Leonard Susskind. They represent the theoretical minimum that you need to know about quantum mechanics (i.e. the title of the video course is “theoretical minimum,” but it is in fact a course on quantum mechanics). Susskind is a great teacher and the videos are great. You can access them on iTunes and YouTube; search for Susskind’s Stanford lectures on quantum mechanics. They were released just a few weeks ago.

Finally, the text you want is Principles of Quantum Mechanics by Shankar. He is also a great teacher. He has some video lectures on general physics, but none on quantum mechanics. Nonetheless, his book is a great one to learn from. It is about $70, but if you google around (with PDF in your google search) you may get lucky.

43: Would time freeze if you could travel at the speed of light? (score 144599 in 2014)

Question

I read with interest about Einstein’s Theory of Relativity and his proposition about the speed of light being the universal speed limit.

  1. So, if I were to travel in a spacecraft at (practically) the speed of light, would I freeze and stop moving?

  2. Would the universe around me freeze and stop moving?

  3. Who would the time stop for?

Answer accepted (score 21)

This kind of question has a long and honorable history. As a young student, Einstein tried to imagine what an electromagnetic wave would look like from the point of view of a motorcyclist riding alongside it. But we now know, thanks to Einstein himself, that it really doesn’t make sense to talk about such observers.

The most straightforward argument is based on the positivist idea that concepts only mean something if you can define how to measure them operationally. If we accept this philosophical stance (which is by no means compatible with every concept we ever discuss in physics), then we need to be able to physically realize this frame in terms of an observer and measuring devices. But we can’t. It would take an infinite amount of energy to accelerate Einstein and his motorcycle to the speed of light.

Since arguments from positivism can often kill off perfectly interesting and reasonable concepts, we might ask whether there are other reasons not to allow such frames. There are. One of the most basic geometrical ideas is intersection. In relativity, we expect that even if different observers disagree about many things, they agree about intersections of world-lines. Either the particles collided or they didn’t. The arrow either hit the bull’s-eye or it didn’t. So although general relativity is far more permissive than Newtonian mechanics about changes of coordinates, there is a restriction that they should be smooth, one-to-one functions. If there was something like a Lorentz transformation for v=c, it wouldn’t be one-to-one, so it wouldn’t be mathematically compatible with the structure of relativity. (An easy way to see that it can’t be one-to-one is that the length contraction would reduce a finite distance to a point.)

What if a system of interacting, massless particles was conscious, and could make observations? The argument given in the preceding paragraph proves that this isn’t possible, but let’s be more explicit. There are two possibilities. The velocity V of the system’s center of mass either moves at c, or it doesn’t. If V=c, then all the particles are moving along parallel lines, and therefore they aren’t interacting, can’t perform computations, and can’t be conscious. (This is also consistent with the fact that the proper time s of a particle moving at c is constant, ds=0.) If V is less than c, then the observer’s frame of reference isn’t moving at c. Either way, we don’t get an observer moving at c.

Answer 3 (score 21)

Yes, I agree with David. If, somehow, you were able to travel at the speed of light, it would seem that ‘your time’ would not have progressed in comparison to your reference time once you returned to ‘normal’ speeds. This can be modeled by the Lorentz time-dilation equation:

\[T=\frac{T_0}{\sqrt{1 - (v^2 / c^2)}}\]

When traveling at the speed of light (\(v=c\)), the expression under the radical becomes 0, and the reference time (\(T_0\)) divided by zero diverges to infinity: a finite amount of the traveler’s time corresponds to an unbounded amount of outside time. You could therefore infer that time is ‘frozen’ for an object traveling at the speed of light.
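The divergence of the dilation factor as \(v \to c\) can be made concrete by tabulating it for a few speeds (a small sketch; the sample speeds are arbitrary):

```python
import math

def dilation_factor(v_over_c):
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2), so that T = gamma * T0."""
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

for beta in (0.1, 0.9, 0.99, 0.9999):
    print(beta, dilation_factor(beta))
# gamma grows without bound as v -> c, which is the sense in which time
# would "freeze" for a clock approaching the speed of light
```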

44: Why is light called an ‘electromagnetic wave’ if it’s neither electric nor magnetic? (score 143647 in 2012)

Question

How can light be called electromagnetic if it doesn’t appear to be electric nor magnetic?

If I go out into the sunlight, magnets aren’t affected (or don’t seem to be). And there is no transfer of electric charge/electrons (as there is in AC/DC current in space).

In particular, the photons (which light is supposed to be composed of) have no electric charge (nor do they have magnetic charge).

I’m looking for an explanation that can be appreciated by the average non-physicist Joe.

Answer accepted (score 62)

Light is an oscillating electric and magnetic field, so it is electrical and magnetic.

Later: re the edit to your question, I think there are two issues. Firstly the interaction with electric charge and secondly the interaction with magnets.

Light does not carry any charge itself, so it does not attract or repel charged particles like electrons. Instead, light is an oscillating electric and magnetic field. If you take an electron and put it in a static electric field (e.g. around a Van de Graaff generator), the electron feels a force due to the field and will move. This happens when an electron interacts with a light wave, but because the light wave is an oscillating field, the electron moves to and fro and there is no net motion. If you could watch an electron as light passes by, you’d see it start oscillating to and fro, but its net position wouldn’t change.

This is exactly what happens in your TV aerial. The light (i.e. radio frequency EM) causes electrons in the TV aerial to oscillate and this oscillation generates an oscillating electric current. The voltage this generates is amplified by your TV. At the TV transmitter the same happens in reverse: an oscillating voltage is applied to the TV transmitter, the electrons oscillate in response and the oscillation generates an electromagnetic wave. So the process is oscillating electrons -> light -> oscillating electrons.

I’m not entirely sure what you mean by “there is no transfer of electric charge/electrons (as there is in AC/DC current in space)”. If the above doesn’t satisfactorily explain what’s going on, maybe you could expand on your question.

And finally on to the interaction with magnets.

The big difference between electric and magnetic fields is that (as far as we know) there are no isolated magnetic charges. If there were isolated magnetic charges e.g. if you could watch a magnetic monopole as a light wave passed by then you’d see similar behaviour to an electron. But there aren’t, so you don’t.

Answer 2 (score 25)

How can light be called electromagnetic if it doesn’t appear to be electric nor magnetic??

But light does appear to be electric and magnetic in nature. For example:

Photovoltaic effect:

The photovoltaic effect is the creation of voltage or electric current in a material upon exposure to light.

Focus: Measuring the Magnetism of Light

Now two groups have independently demonstrated that a tiny, metallic probe will interact strongly with the magnetic field of light waves trapped in a sort of semiconductor “box.”

Answer 3 (score 16)

How can light be called electromagnetic if it doesn’t appear to be electric nor magnetic?

According to the theory of electricity and magnetism, charged particles which are stationary are “electric”, charged particles which move at a constant velocity are “magnetic”, and charged particles which accelerate will emit “electromagnetic radiation”, which travels at the speed of light.

Charged particles can’t interact instantaneously; rather, there is a field of energy which mediates their interaction. This field of energy is what we call “the electromagnetic field”.

In other words, “light” is the transportation of energy from one part of the electromagnetic field to another, and it facilitates the interaction between electric and magnetic objects, but is neither electric nor magnetic itself.

45: Why do grapes in a microwave oven produce plasma? (score 140595 in 2017)

Question

Some of you may know this experiment (Grape + Microwave oven = Plasma video link):

  • take a grape that you have almost split in two, leaving just a tiny piece of skin linking the two halves.
  • put it in a microwave oven; a few seconds later, a ball of light which seems to be a plasma appears above the grape

Looking through google to understand why this phenomenon happens, I have found either laconic or partial answers to that question.

In broad strokes, this is what I understand:

  1. Microwaves seem to create an electric current in the grape because of ions.

    • Why do grapes contain ions?
  2. Suddenly the tiny link between the two half-parts is broken which creates an electric arc

    • How is that link broken?
  3. In parallel, the grape is warmed up and a gas is released from the grape

    • What is this gas made of? Water? Sugar?
  4. The combination of the electric arc in that gas creates a plasma

    • What is the necessary condition for a gas crossed by an electric arc to create plasma?

Is that correct?

Are there any relevant parameters (microwave frequency, grape size, grape orientation) that make it work?

Any idea of the orders of magnitude for the current involved, the voltage of such an arc, or the temperature reached (I’ve read 3000 degrees!)?

Does someone have a complete explanation to provide (references to physical principles would be appreciated!)?

Answer accepted (score 28)

There does seem to be a lot of mythology around the “grape in a microwave” experiment. I have never seen any publications on the subject in a respectable journal; however, from chatting to other scientists there seems to be a consensus about what happens.

It’s all rather boring really. The grape is the right size (about a quarter wavelength) and shape to act as an antenna that focusses the power in the middle. The skin joining the grape halves heats up, vapourises and bursts into flame.

If anyone feels in an experimental mood some obvious tests of this would be to change the grape size and shape, and see if that affects the flame. Less easy to do at home would be to try the experiment with a nitrogen atmosphere as that should prevent combustion.
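The “right size” claim is easy to estimate. Water’s relative permittivity at microwave frequencies is roughly 80, giving a refractive index near 9, so the wavelength inside a watery grape is around a centimetre, comparable to the grape’s diameter. A back-of-the-envelope sketch (the permittivity value is a rough textbook figure, not from this answer):

```python
import math

C = 299_792_458.0   # speed of light, m/s
F = 2.45e9          # typical microwave-oven frequency, Hz
EPS_R_WATER = 80.0  # approximate relative permittivity of water at ~2.45 GHz

wavelength_vacuum = C / F                        # ~12.2 cm in air/vacuum
n_water = math.sqrt(EPS_R_WATER)                 # refractive index, ~8.9
wavelength_in_grape = wavelength_vacuum / n_water
print(wavelength_vacuum * 100, wavelength_in_grape * 100)  # in cm: ~12.2 and ~1.4
```

That centimetre-scale internal wavelength is why a grape, and not, say, a watermelon, sits at the sweet spot for concentrating microwave power.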

Answer 2 (score 4)

As of early 2019 a research paper has been published to explain this phenomenon as due to resonance of the microwaves:

H.K. Khattak, P. Bianucci, and A.D. Slepkov, “Linking plasma formation in grapes to microwave resonances of aqueous dimers,” Proc. Natl. Acad. Sci. USA, vol. 116, no. 10, pp. 4000-4005, Mar 5, 2019. DOI: 10.1073/pnas.1818350116

Here I quote an excerpt of its abstract:

By expanding this phenomenon to whole spherical dimers of various grape-sized fruit and hydrogel water beads, we demonstrate that the formation of plasma is due to electromagnetic hotspots arising from the cooperative interaction of Mie resonances in the individual spheres. The large dielectric constant of water at the relevant gigahertz frequencies can be used to form systems that mimic surface plasmon resonances that are typically reserved for nanoscale metallic objects. The absorptive properties of water furthermore act to homogenize higher-mode profiles and to preferentially select evanescent field concentrations such as the axial hotspot.

And an excerpt of its summary:

[W]e have shown that the popular-science phenomenon of forming plasma with grapes in a household microwave oven is explained by MDR behavior. Grapes act as spheres of water, which, due to their large index of refraction and small absorptivity, form leaky resonators at 2.4 GHz. Mie resonances in isolated spheres coherently add when brought together such that the aqueous dimer displays an intense hotspot at the point of contact that is sufficient to field-ionize available sodium and potassium ions, igniting a plasma.

46: How can a salt solution conduct electrical current? (score 140241 in 2018)

Question

How does a sodium chloride solution conduct electricity?

I know that sodium chloride dissociates into sodium and chloride ions in water, so when a voltage is applied, those ions can move. However, if I have two poles of metal placed in a beaker and put a voltage between them, the current flowing in the metal is made of electrons, while the current flowing in the water is made of ions. How can one turn into the other?

Answer accepted (score 13)

The sodium and chloride ions actually separate in water, turning solid NaCl into Na+ and Cl- ions that can move freely through the solution. Electrons are one form of charge carrier, and the most common, since they have a net negative charge and are mobile inside metals; but free ions moving around in a solution also constitute a current.

EDIT in response to comment: When you put two metal poles into a solution (a negative anode and a positive cathode) and turn on a battery, you are creating a voltage difference between the two rods. As you may know from circuits, voltage differences are what drive currents, but how this happens is what separates electrons conducting current through a wire from ions conducting it through the salt-water solution. I will separate this process into numbered steps, since I got very tangled up when trying to think of all the mechanisms at once.

Step 1. The battery is turned on and creates a voltage difference across the electrodes. Nothing is conducting at this point and no current is flowing.

This is a tricky part that I would appreciate an answer to if someone more knowledgeable is reading this but I believe this is correct.

Electrons will accumulate on the anode, giving that rod a negative net charge and the other rod a positive net charge.

Step 2. This voltage difference (and possible excess charge accumulation) sets up an electric field in the solution. This attracts the positive Na+ ions to the negative anode, as the positive cathode attracts the negative Cl- ions. Below is a picture I found in a PDF titled ‘Electrical Conduction in Solutions’ that illustrates this nicely.

A picture taken from the PDF 'Electrical Conduction in Solutions', URL: eee.uci.edu/programs/gchem/C05MANElectricalConduction.pdf

This is where the picture gets complicated. Simple attraction of ions is not enough to sustain a current through a solution; if nothing else were occurring in the solution besides the Na+ going to the anode and Cl- to the cathode, then once all the ions reach their respective electrodes nothing else in the solution would move.

So there is something fishy going on in the ‘simple’ salt water battery. Naive chemistry tells us this is the reaction that occurs when salt is dissolved in water:

NaCl(s) + H20(l) → Na+(aq) + Cl-(aq) + H20(l)

A hint on what is happening comes from the unexpected sector of chlorine production. Most of the chlorine used in the world is made using the following industrial process of purifying salt water:

2 NaCl(s) + 2 H2O(l) → Cl2(g) + H2(g) + 2 NaOH(aq)

You may have noticed this yourself when one actually sets up a salt water battery: gas bubbles accumulate on both electrodes and nothing precipitates out.

Step 3. You may know that electrolysis is the process by which a current is used to drive an otherwise non-spontaneous chemical reaction. The non-spontaneous reactions that the battery drives are so-called ‘redox’ reactions, where chemical species lose or gain electrons.

In this case, the negative accumulation of electrons at the anode provides the excess electrons needed to decomposes H20 into OH- and H+:

Anode (reduction): 2 H2O(l) + 2e- → H2(g) + 2 OH-(aq)

These hydroxide ions are continuously made near the anode and the hydrogen gas bubbles out, so two of three species from the brine equation are accounted for.

Step 4a. Around the anode, we now have a concentration of negative ions (OH-) by a negative electrode. These negative ions are subject to the electric field in the solution and are repelled away from the anode and attracted to the cathode, causing OH- to migrate the cathode.

In pure water, this would be the complete picture. Water would pick up an electron at the anode, decompose into hydroxide, which would migrate to the cathode, pick up an electron and turn back into H20 and electrons would be ferried across the solution.However, the reason that pure water alone is a poor conductor is that the diffusion of OH- across the electrode gap is very slow and makes for weak conduction. This is why we need to add a source of ions, such as NaCl, to get good conduction.

Step 4b. When NaCl is added to water, it is the Cl- ions are the ones that actually reach the cathode and react to deposit their electrons:

Cathode (oxidation): 2 Cl-(aq) → Cl2(g) + 2e-

This accounts for the final species we were missing and also completed the cycle. H20 picks up electrons at the anode and the OH- atoms carry it to the cathode. At the same time, the Cl- ions that dissociated in the water move towards the cathode and deposit electrons to become a gas. Thus the net movement of electrons from anode to cathode is complete and a current can flow.

I hope this is reasonably clean and clear after my edit. Also, I am a physics student, not a chemistry student so I welcome anyone pointing out errors or missing subtleties in my explanation.

Answer 2 (score 1)

You know that salts (acid-base products, unlike metals) dissociate into ions when dissolved in solution, right? Well, we’ve got the well-known Arrhenius theory for that. Let’s take your example: \(\text {NaCl}\). An aqueous solution of this salt gives \(\text {Na}^+\) and \(\text {Cl}^-\) ions.

When you apply an external electric field (electrolysis), we obtain a current. Current is just the ordered motion of charge carriers (electrons in the case of metals, ions in the case of solutions). The applied electric field exerts a force on the charges, causing them to move towards the electrodes that attract them. The ions acquire a drift velocity \(v_d\) in the solution, much as the electrons do in the metal. This is observed as a current.

This case is slightly similar to that of a semiconductor (where free electrons and holes are the charge carriers). The conventional current is in the direction of motion of the cations, and opposite to that of the anions.

The overall principle still remains the same (I know I’m repeating the same phrase): current is an effect of the motion of charge carriers. I can’t understand why it causes so much difficulty. Free electrons are the charge carriers in metals, while ions are the charge carriers in solutions. Full stop.


Of course, the metals won’t let ions pass through them. But the ions transfer their charge (they’re just charge carriers and so they can do such things) with the help of electrons. Say, a \(\text {Na}^+\) ion takes an electron from the cathode, and this charge is relayed among the nearby \(\text {Na}^+\) ions. In this way, a net current is observed. And for \(\text {Cl}^-\), it’s the other way round (in the opposite direction).
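
The carrier picture in this answer can be put into rough numbers with the standard drift-current relation \(I = nqv_d A\); a minimal sketch, where every numerical value is an illustrative assumption rather than a quantity from the answer:

```python
# Order-of-magnitude current from drifting ions, using I = n * q * v_d * A.
# All numbers below are assumed for illustration.
e = 1.602e-19   # C, elementary charge
n = 6.0e26      # singly charged ions per m^3 (roughly a 1 mol/L solution)
v_d = 1e-5      # m/s, assumed ion drift speed
A = 1e-4        # m^2, assumed cross-section of the current path
I = n * e * v_d * A
print(I)  # on the order of 0.1 A
```

Even with the tiny drift speed assumed here, the enormous carrier density gives a macroscopic current, which is why salt water conducts so well.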

47: Why does the gas get cold when I spray it? (score 139974 in 2012)

Question

When you spray gas from a can of compressed gas, the gas gets very cold, even though the can is at room temperature.

I think that when it goes from high pressure to a lower one it gets cold, right? But what exactly is the reason behind that?

Answer accepted (score 21)

The temperature of the gas that is sprayed goes down because it adiabatically expands. This is simply because there is no heat transferred to or from the gas as it is sprayed, for the process is too fast. (See this Wikipedia article for more details on adiabatic processes.)

The mathematical explanation goes as follows: let the volume of the gas in the container be \(V_i\), and its temperature \(T_i\). After the gas is sprayed it occupies volume \(V_f\) and has temperature \(T_f\). In an adiabatic process \(TV^{\,\gamma-1}=\text{constant}\) (\(\gamma\) is a number bigger than one), and so \[ T_iV_i^{\,\gamma-1}=T_fV_f^{\,\gamma-1}, \] or \[ T_f=T_i\left(\frac{V_i}{V_f}\right)^{\gamma-1}. \] Since \(\gamma>1\) and, clearly, \(V_f>V_i\) (the volume available to the gas after it’s sprayed is much bigger than the one in the container), we get that \(T_f<T_i\), i.e. the gas cools down when it’s sprayed.

By the way, adiabatic expansion is the reason why you are able to blow both hot and cold air from your mouth. When you want to blow hot air you open your mouth wide, but when you want to blow cold air you tighten your lips and force the air through a small hole. That way the air goes from a small volume to the big volume around you, and cools down according to the equations above.
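
The adiabatic cooling formula above is easy to evaluate numerically; a minimal sketch, assuming room temperature, a tenfold expansion, and \(\gamma = 1.4\) (a diatomic gas):

```python
# T_f = T_i * (V_i / V_f)**(gamma - 1) for an adiabatic expansion.
gamma = 1.4        # heat-capacity ratio, assumed diatomic gas
T_i = 293.0        # K, assumed initial (room) temperature
expansion = 10.0   # assumed ratio V_f / V_i
T_f = T_i * (1.0 / expansion) ** (gamma - 1)
print(T_f)  # about 117 K: the gas cools dramatically
```

Even a modest tenfold expansion more than halves the absolute temperature, consistent with how cold the sprayed gas feels.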

Answer 2 (score 14)

This is a very confused discussion. Gas being forced through a nozzle, after which it has a lower pressure, is an irreversible process in which the entropy increases. This has nothing to do with adiabatic expansion. It has everything to do with the Joule-Thomson effect, which is discussed in this Wikipedia article.

The change in temperature following the drop in pressure behind the nozzle is proportional to the Joule-Thomson coefficient, which can be related to the (isobaric) heat capacity of the gas, its thermal expansion coefficient, and its temperature. This is a famous standard example in thermodynamics for deriving a nontrivial thermodynamic relation by using Maxwell relations, Jacobians, and whatnot. Interestingly, it is not certain that the temperature drops. For an ideal gas – which seems to be the only example discussed so far in this thread – it wouldn’t, because the Joule-Thomson coefficient exactly vanishes. This is because the cooling results from the work which the gas does against its internal van der Waals cohesive forces, and there are no such forces in an ideal gas.

For a real gas cooling can happen, but only below the inversion temperature. For instance, the inversion temperature of oxygen is about \(1040\ \mathrm{K}\), much higher than room temperature, so the JT expansion of oxygen will cool it. \(\text{CO}_2\) has an even higher inversion temperature (about \(2050\ \mathrm{K}\)), so \(\text{CO}_2\) fire extinguishers, which really just spray \(\text{CO}_2\), end up spraying something that is very cold. Hydrogen, on the other hand, has an inversion temperature of about \(220\ \mathrm{K}\), much lower than room temperature, so the JT expansion of hydrogen actually increases its temperature.
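
The inversion temperatures quoted above decide the sign of the effect at room temperature; a small sketch of the comparison:

```python
# A gas cools on Joule-Thomson expansion only below its inversion temperature.
T_inv = {"O2": 1040.0, "CO2": 2050.0, "H2": 220.0}  # K, values from the answer
T_room = 293.0  # K, assumed room temperature

cools = {gas: T_room < T for gas, T in T_inv.items()}
print(cools)  # O2 and CO2 cool on expansion; H2 warms
```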

Answer 3 (score 13)

Your question is about large drops in pressure, and why they cool gases. The answer is that the gas does work in the process of expanding, and this work releases energy to the environment.

If you prevent the gas from doing work, if there is nothing for it to push against, it doesn’t get cold. If you have a dilute gas in the corner of a room and you open a barrier to a vacuum, the gas expands into the vacuum with no change in temperature. This is not what you are doing when you spray the can into air. There, the gas is encountering air, and produces a pressure wall which it then pushes against doing work. Once the equilibrium spray-profile is established, there is a pressure gradient from the can outward that accelerates the spray to its final velocity. Travelling along this pressure gradient, the gas expands and does work, and this removes energy from the gas. The cold temperature profile sneaks back towards the can, because the air is such a lousy conductor of heat, so the heat is all coming from the can. Eventually, your hand gets cold.

48: Relation between Electric field and potential (score 139093 in 2016)

Question

I am unable to understand where this \(-\) sign comes from. Which step have I done wrong?

Answer accepted (score 6)

Relation between Electric field and potential

The relationship between electric field \(\bf E\) and scalar potential \(\varphi\) is given as \[\mathbf E= -\mathbf \nabla\,\varphi\] where \(\mathbf \nabla \equiv \textrm{gradient operator}\;.\)

I am unable to understand where this \(-\) sign comes from.

It is worthy to quote from Purcell:

The minus sign came in because the electric field points from a region of higher potential toward a region of lower potential, whereas the vector \(\mathbf \nabla \varphi\) is defined so that it points in the direction of increasing \(\varphi\;.\)

The crux of this quote is that the electric field \(\bf E\) points in the direction opposite to the direction of increasing scalar potential \(\varphi\;.\)

Which step have I done wrong?

Remember, change in potential energy \(U\) is given as \[U(x)- U(x_0)= -\int_{x_0}^x \,\mathbf F(x)\,\mathrm dx\;.\]

So your approach should be to compute the work done against the electric field by an external agent in carrying the charge from point \(\rm A\) to point \(\rm B\); that implies the work is given by the negative of the component of the electric field in the direction of motion.

Answer 2 (score 1)

\(W = q(V_{\rm final} - V_{\rm initial}) = q((V+dV) -V)=q\, dV\)

Let \(\vec E = E\,\hat i\) and \(d\vec r = dr\,\hat i\), where \(E\) and \(dr\) are components in the \(\hat i\) direction.

\(\vec F_{\rm external} = -qE\, \hat i\) and so \(W = \vec F_{\rm external} \cdot d \vec r = -qE\, \hat i \cdot dr \hat i = -qE \, dr\)

Which gives \(E = - \dfrac{dV}{dr}\)

As \(dr\) is the component of \(d\vec r\) in the \(\hat i\) direction, \(dr\) can be either positive or negative depending on which way the charge is displaced.

Going back to \(W = -q E \, dr\), assume that \(E\) is positive. If \(dr\) is positive then the work done \(W\) is negative, i.e. work is done by the electric field and the potential has decreased.

If \(dr\) is negative (as you have in your diagram) then the work done \(W\) is positive, i.e. work is done by the external force and the potential has increased.

Answer 3 (score 0)

To place a charge in the vicinity of an electric field, you must do work against the electrostatic force on the charge. The work done per unit charge to bring a charge \(q\) from infinity to a distance \(r\) in the field of some other charge configuration is what we call the potential at the point \(r\).

To move a charge \(q\) from a potential \(V\) through an infinitesimal distance to where the potential is \(V+dV\), work has to be done against the electric field.

The force acting on the charge \(q\) is

\[F = qE\]

So work done to move the charge through a potential difference of \(dV\) is:

\[dW = -F.dr\]

the negative sign implies work has to be done against the electrostatic force.

This work is the charge times potential difference between the points a and b (separated by a distance \(dr\))

\[dW = -F.dr = -q(E.dr) = q\,dV\]

or

\[ -E.dr = dV\]

or

\[E = -\frac{dV}{dr}\]

\(\frac{dV}{dr}\) is called the gradient of the scalar potential \(V\).

Hence the electric field is the negative gradient of the scalar potential. The negative sign appears because the potential difference is the work done per unit charge against the electrostatic force to move a charge from a to b.

However, this equation is valid only for static electrostatic fields.

The error in your math was that to calculate the work done, the displacement should be against the force. So you must put a negative sign as I did:

\[dW = -F.dr\]
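
The relation \(E = -\frac{dV}{dr}\) derived above can be checked numerically; a minimal sketch, assuming a uniform field whose strength is an illustrative value:

```python
def field_from_potential(V, x, h=1e-6):
    # E = -dV/dx, estimated with a central finite difference
    return -(V(x + h) - V(x - h)) / (2 * h)

E0 = 5.0               # V/m, assumed field strength
V = lambda x: -E0 * x  # potential of a uniform field along x
print(field_from_potential(V, 0.3))  # recovers E0
```

For this linear potential the central difference is exact up to rounding, and the minus sign in the formula is precisely what makes the recovered field point down the potential gradient.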

49: What is the difference between “kinematics” and “dynamics”? (score 138301 in 2014)

Question

I have noticed that authors in the literature sometimes divide characteristics of some phenomenon into “kinematics” and “dynamics”.

I first encountered this in Jackson’s E&M book, where, in section 7.3 of the third edition, he writes, on the reflection and refraction of waves at a plane interface:

  1. Kinematic properties: (a) Angle of reflection equals angle of incidence (b) Snell’s law
  2. Dynamic properties (a) Intensities of reflected and refracted radiation (b) Phase changes and polarization

But this is by no means the only example. A quick Google search reveals “dynamic and kinematic viscosity,” “kinematic and dynamic performance,” “fully dynamic and kinematic voronoi diagrams,” “kinematic and reduced-dynamic precise orbit determination,” and many other occurrences of this distinction.

What is the real distinction between kinematics and dynamics?

Answer accepted (score 44)

In classical mechanics “kinematics” generally refers to the study of properties of motion – position, velocity, acceleration, etc. – without any consideration of why those quantities have the values they do. “Dynamics” means a study of the rules governing the interactions of these particles, which allow you to determine why the quantities have the values they do.

Thus, for example, problems involving motion with constant acceleration (“A car starts from rest and accelerates at 4m/s/s. How long does it take to cover 100m?”) are classified as kinematics, while problems involving forces (“A 100g mass is attached to a spring with a spring constant of 10 N/m and hangs vertically from a support. How much does the spring stretch?”) are classified as “dynamics.”

That’s kind of an operational definition, at least.

Answer 2 (score 26)

  1. Statics: Study of forces in equilibrium without consideration of changes over time.
  2. Kinematics: Study of motions (position, velocity, acceleration)
  3. Kineto-statics: Study of forces in equilibrium, with the addition of motion-related forces (like inertia forces via D’Alembert’s principle), one instant at a time. Results from one time frame do not affect the results in the next time frame.
  4. Dynamics: Full consideration of time-varying phenomena in the interaction between motions, forces and material properties. Typically there is a time-integration process where results from one time frame affect the results in the next time frame.

As for the source of the terms kinematic and dynamic viscosity, I am not sure, and I have wondered this myself. Maybe it stems from the test methods used to measure each property.

Answer 3 (score 16)

Since everybody already gave nice replies to this question, I’ll give a more pragmatic answer:

Don’t worry about it. It is an arbitrary distinction made by humans. Nature doesn’t care if some phenomenon can be described/explained purely from kinematic considerations or not. It’s not a fundamental distinction.

On the other hand, it is a useful distinction. I’m sure you know the distinction somehow implicitly when you solve problems.

Let me give an example in mechanics: you swing a pendulum in a vertical plane, swinging fast enough that the trajectory is a circle. What is the tension in the pendulum when it passes through the lowest point of the circle? The tension is a dynamical quantity, because it is a force. Now, when you solve the problem, you don’t write down the full equations of Newton and solve them. You use the kinematic information you have about the trajectory: it’s a circle, and at the lowest point of the trajectory there is no tangential acceleration, so the acceleration is directed radially inwards and equals \(v^2/r\). From this you can find the tension by using purely kinematic considerations, never solving \(\vec{F}=m\vec{a}\) as a differential equation.

I guess you understood that in physics we do this all the time. If we didn’t, many problems would be impossible to tackle without resorting to extensive computer simulations. In most problems we consider, we already have some idea of the kinematics, which permits us to reduce the space of acceptable solutions. Sometimes so drastically (but only for the simplest problems) that we can solve them by purely kinematic considerations.
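
The pendulum example above can be worked through with numbers; a minimal sketch, with the mass, speed, and radius all assumed for illustration:

```python
# At the lowest point: T - m*g = m*v**2/r, so the tension is T = m*(g + v**2/r).
g = 9.81   # m/s^2, gravitational acceleration
m = 0.5    # kg, assumed bob mass
v = 2.0    # m/s, assumed speed at the lowest point
r = 1.0    # m, assumed string length
T = m * (g + v**2 / r)
print(T)  # tension in newtons, necessarily larger than the weight m*g
```

The kinematic input (circular motion, radial acceleration \(v^2/r\)) turns a dynamics question into one line of arithmetic.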

50: What is a phase of a wave and a phase difference? (score 135899 in 2018)

Question

What is the meaning of the phase of a wave and phase difference? How do you visualize it?

Answer accepted (score 9)

Here is a graph of a sine function. It is a function of the angle \(\theta\), which goes from \(0\) to \(2 \pi\), and the value of \(\sin(\theta)\) is bounded by \(-1\) and \(1\).

[Graph: one period of the sine function]

Carried on further along the x-axis, this function of \(\theta\) repeats itself every \(2\pi\). From the graph, one can see that it looks like a wave, and indeed sines (and cosines) arise as solutions of a number of wave equations, where the variable is a function of space and time.

In the following equation

\[u(x, t) = A(x, t)\sin(kx - \omega t + \phi)\]

\(\phi\) (“phi”) is a “phase.” It is a constant that tells what value the sine function has when \(t=0\) and \(x=0\).

If one happens to have two waves overlapping, then \(\phi_1 - \phi_2\) is the phase difference of the two waves: how much they differ at the beginning (\(x=0\) and \(t=0\)). This phase difference is evidently kept all the way through.

Answer 2 (score 4)

Let us consider a travelling wave along a very long piece of string. The string will oscillate, and the displacement, \(y\), of the string from the flat position (no wave at all) is given by the following equation assuming that the wave does not have a head start

\(y(x,t)=A_0\sin(\frac{2\pi}{\lambda}x-\frac{2\pi}{T}t)\)

where:

\(A_0\) = the maximum departure of the string from the flat position (called: amplitude)

\(T\) = the time taken by a particle in the string to complete one oscillation, return to its initial position and repeat the oscillation over and over again (called: period).

\(\lambda\) = the wavelength of the wave along the string. Imagine this as the distance travelled by the wave in one period, \(T\). Hence one can write the equation \(v=\lambda f\), where \(f\) is the frequency of the oscillation of a particle in the string. You can think of this as the number of complete cycles the wave completes in one second.

The Phase:

The phase of the wave is the quantity inside the brackets of the sin-function, and it is an angle measured either in degrees or radians.

\(\phi=(\frac{2\pi}{\lambda}x-\frac{2\pi}{T}t)\)

The phase of a wave is not a fixed quantity. Its value depends on what point along the x-axis and at what time you observe the wave. For example, if you consider two points \(x_1\) and \(x_2\) along the \(x\)-axis at some common instant in time \(t_c\), these two points will have their own phase \(\phi_1\) and \(\phi_2\) given as

\(\phi_1=( \frac{2\pi}{\lambda}x_1-\frac{2\pi}{T}t_c)\)

\(\phi_2=(\frac{2\pi}{\lambda}x_2-\frac{2\pi}{T}t_c)\)

The phase difference the wave has at these two points is

\(\phi_2-\phi_1=\frac{2\pi}{\lambda}(x_2-x_1)\)

The important result here is that the two waves can be:

  1. In phase if \(x_2-x_1=n\lambda\), i.e. the wave is doing exactly the same thing at such points along the x-axis.

  2. Out of phase if \(x_2-x_1=(n+\frac{1}{2})\lambda\), i.e. one point in the string, \(x_1\) say, is moving upwards while \(x_2\) is moving downwards, but symmetrically.

This analysis holds for two coherent waves coming from two coherent sources that travel different distances and combine at some point which is a distance \(x_1\) from one source and a distance \(x_2\) from the other. You will get constructive interference in case (1), and destructive interference in case (2). This is why you are able to observe an interference pattern.
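
The in-phase and out-of-phase conditions above can be checked directly; a minimal sketch, with an assumed wavelength:

```python
import math

def phase_difference(x1, x2, wavelength):
    # delta_phi = (2*pi / lambda) * (x2 - x1)
    return 2 * math.pi * (x2 - x1) / wavelength

lam = 0.5  # m, assumed wavelength
# Path difference of 2*lam -> delta_phi = 4*pi, an even multiple of pi: in phase.
print(phase_difference(0.0, 1.0, lam) / math.pi)   # 4.0
# Path difference of 2.5*lam -> delta_phi = 5*pi, an odd multiple of pi: out of phase.
print(phase_difference(0.0, 1.25, lam) / math.pi)  # 5.0
```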

Answer 3 (score 2)

What is the meaning of phase difference?

It’s an offset, in time or space, of one wave with respect to another

[Diagram: two identical waves offset in phase]

If you make an arbitrary choice and say your wave “starts” when its height is 0, then if you start a second wave a short time later it will be out of phase with the first wave. If you start the second wave at a later time that is an exact multiple of the time the first wave takes to repeat, the second wave will be in phase.

51: On a hot day, when it’s cooler outside than in; is it better to put a fan in an open window pointing inwards or outwards? (score 133435 in 2013)

Question

If it’s really hot inside, but cooler outside; what is the best way to place a single fan to try and cool a room down?

I always assumed it would be better pointing inwards (and this thread suggests the same).

However; today I had a thought - if the room had a bad smell in it, we would probably expect a fan blowing air out would remove the smell faster than a fan blowing inwards. Is a smell really any different to heat? Would it be more efficient to remove heat by blowing the fan a different way to a smell?

Answer accepted (score 39)

From a purely temperature point of view, not human-perceived hotness, it is better to point the fan outward. This is because the fan motor will dissipate some heat, and when the air is blown outwards, this heat goes outside. This all assumes the room has enough ventilation cracks and the like that the pressure inside stays effectively the same as the pressure outside, regardless of what the fan is doing.

Human-perceived hotness is quite different because humans are a heat source themselves and have a built-in evaporative cooling system. Air flow will help with the cooling process and remove heat from the area around the body. A human sitting in a chair in the room with the fan blowing in will feel cooler than with the fan blowing out due to the higher motion of the air in the room.

If the point is to make you in the room feel cooler, blow the air in. The extra power from the fan motor is a minuscule effect in the overall scheme of a normal room in a house and the kind of airflow such a fan would create. Worrying about the fan motor power is really nitpicking, but can be significant for things like cooling chassis of electronics.

Another issue is where the air comes from that enters the room if the fan blows outward. If it is coming from other parts of the same house that are also hot, then that may technically be the most efficient for bringing down the temperature in the whole house, but less useful for just the room in question.

Added

The original question was about blowing air “in” or “out” with a fan. That implied the fan was in a window or such so that inside air would be on one side and outside air on the other.

The more the fan is inside the room, the less effective it will be. Just moving air around inside the room does nothing to cool it. In fact, the extra power from the fan actually heats the room, although very slightly. This can still be useful if the point is to make a human feel cooler.

However, to actually cool the room, the hot air in the room must be swapped for the cooler air outside. With a single fan you only get to force this in one direction, and the other happens thru open windows, doorways, etc. In that sense, the direction of the fan is irrelevant (ignoring the tiny extra power of the fan itself). Cool air will come in, and warm air will go out.

It is best to place the fan in a window or the like where there is a direct connection between the inside and outside air. For best effect, this portal should be sealed around the outside of the fan so that air can’t just loop around the fan and not contribute to the overall movement.

If the fan can’t be placed right at the inside/outside interface, then it will rapidly become less useful as it is moved into the room. 20 cm (8 inches) inside from a window is enough to make a difference. In that case, blowing the air out is better. That is because the exhaust air of the fan is in a tighter stream, and is therefore faster and stays together for a short distance. If it exits the room within this short distance, then a good fraction of the air moved by the fan is still moved outside the room. Again, this effect diminishes rapidly with distance. 20 cm might still be somewhat effective if the fan has a considerably larger diameter than that.

If you can make a duct so that all the air moved by the fan is forced to go outside, then the efficiency increases greatly. However, the longer the duct, the more resistance to air movement it creates, and the less overall air the fan moves. Usefulness goes down due to the fan moving less air, even though all the moved air goes outside.

Answer 2 (score 7)

If this all reasonably happens in a steady state, so the air pressure inside the house is constant, then any flow of air out of the window has to be matched by an equal flow in somewhere else, so either way the room is going to cool down: a parcel of cool outside air will replace an equal mass of warm inside air, and the direction of the fan doesn’t matter.

The fan is going to set up a current of air through the house regardless.

If there is a heat source inside the house, then a parcel of inside air will still carry the heat from the source to the outside regardless of direction, as long as the source sits in the current of air set up by the fan, but there is a crucial difference in how long the heat from that source remains in the air in the house:

If the house has two open windows and a heat source is near one, clearly a given amount of heat will stay in the house longer if the air current is towards the window that’s farther away, as a parcel of air has to travel a longer distance before it’s outside; so the heat has more time to dissipate, heating the air that’s not directly in the current.

So the house would reasonably be cooler if the fan were pointed at the window closest to the heat source.

So if you’re sitting by a window with your PC, you’re probably the largest heat source in the room, and the fan should be pointed towards the window closest to you so your heat escapes the house quickly. Even more effective than pointing the fan out the window would be positioning it so you are between the fan and the window, since the air current will also have a cooling effect on your skin.

If the fan is the heat source, then yes, point it at the closest window. But as a general rule, point it towards the window closest to the heat source.

Answer 3 (score 3)

In addition to the already accepted answer of Olin Lathrop, I’d like to mention that for cooling a room on a hot summer evening it is important not only to cool the air inside, but also to cool the wall structures. A fan directed inside will improve the heat transfer not only from human skin but also from the wall structures. Therefore, I assume that the walls would cool faster with an inward-directed fan. This would lead to a more effective cooling of your hot room. For concrete walls the heat conduction inside the walls is weak, so it is better to cool the walls from inside than from outside.

52: Is the spring constant k changed when you divide a spring into parts? (score 132074 in )

Question

I’ve always been taught that the spring constant \(k\) is a constant — that is, for a given spring, \(k\) will always be the same, regardless of what you do to the spring.

My friend’s physics professor gave a practice problem in which a spring of length \(L\) was cut into four parts of length \(L/4\). He claimed that the spring constant in each of the new springs cut from the old spring (\(k_\text{new}\)) was therefore equal to \(k_\text{orig}/4\).

Is this true? Every person I’ve asked seems to think that this is false, and that \(k\) will be the same even if you cut the spring into parts. Is there a good explanation of whether \(k\) will be the same after cutting the spring or not? It seems like if it’s an inherent property of the spring it shouldn’t change, so if it does, why?

Answer accepted (score 38)

Well, the sentence

It seems like if it’s an inherent property of the spring it shouldn’t change, so if it does, why?

clearly isn’t a valid argument to calculate the \(k\) of the smaller springs. They’re different springs than their large parent, so they may have different values of an “inherent property”: if a pizza is divided into 4 smaller pieces, the inherent property “mass” of the smaller pizzas is also different from the mass of the large one. ;-)

You may have meant that it is an “intensive” property (like a density or temperature) which wouldn’t change after the cutting of a big spring, but you have offered no evidence that it’s “intensive” in this sense. No surprise, this statement is incorrect as I’m going to show.

One may calculate the right answer in many ways. For example, we may consider the energy of the spring. It is equal to \(k_{\rm big}x_{\rm big}^2/2\) where \(x_{\rm big}\) is the deviation (distance) from the equilibrium position. We may also imagine that the big spring is a collection of 4 equal smaller springs attached to each other.

In this picture, each of the 4 springs has the deviation \(x_{\rm small} = x_{\rm big}/4\) and the energy of each spring is \[ E_{\rm small} = \frac{1}{2} k_{\rm small} x_{\rm small}^2 = \frac{1}{2} k_{\rm small} \frac{x_{\rm big}^2}{16} \] Because we have 4 such small springs, the total energy is \[ E_{\rm 4 \,small} = \frac{1}{2} k_{\rm small} \frac{x_{\rm big}^2}{4} \] That must be equal to the potential energy of the single big spring because it’s the same object \[ = E_{\rm big} = \frac{1}{2} k_{\rm big} x_{\rm big}^2 \] which implies, after you divide the same factors on both sides, \[ k_{\rm big} = \frac{k_{\rm small}}{4} \] So the spring constant of the smaller springs is actually 4 times larger than the spring constant of the big spring.

You could get the same result via forces, too. The large spring has some forces \(F=k_{\rm big}x_{\rm big}\) on both ends. When you divide it into four small springs, there are still the same forces \(\pm F\) on each boundary of the smaller springs. They must be equal to \(F=k_{\rm small} x_{\rm small}\) because the same formula holds for the smaller springs as well. Because \(x_{\rm small} = x_{\rm big}/4\), you see that \(k_{\rm small} = 4k_{\rm big}\). It’s harder to change the length of the shorter spring because it’s short to start with, so you need a 4 times larger force, which is why the spring constant of the small spring is 4 times higher.
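
The force argument above is equivalent to the rule for springs in series; a minimal sketch (with an assumed value for the full spring's constant) verifying that four quarter-pieces with \(k_{\rm small}=4k_{\rm big}\) recombine into the original spring:

```python
def series_k(ks):
    # Springs in series add compliances: 1/k_total = sum(1/k_i)
    return 1.0 / sum(1.0 / k for k in ks)

k_big = 10.0         # N/m, assumed constant of the full spring
k_small = 4 * k_big  # constant of each quarter-length piece
print(series_k([k_small] * 4))  # recovers k_big
```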

Answer 2 (score 12)

To supplement the answer by Luboš Motl, I will come to this problem from a Material Science point of view.

What you mean by the inherent property of the spring is not the spring constant; it is in fact Young's modulus \(E\), which depends only on the material of a body, not its shape.

\[ E = \frac{\text{tensile stress}}{\text{tensile strain}} = \frac{\sigma}{\varepsilon} = \frac{\text{force per area}}{\text{extension per length}} = \frac{F / A}{x / l} = \frac{F l }{x A} \]

Now use this definition to construct Hooke's law: \[ F = \frac{EA}{l} x = k x \] where we see that \[ k = \frac{EA}{l} \]

Now consider what happens when we divide the spring. We change only the length of the spring, whilst keeping A (same cross-section area) and E (same spring, same material) the same. When we make the spring four times shorter we essentially have the following:

\[ k_\text{old} = \frac{EA}{l_\text{old}} = \frac{EA}{4 l_\text{new}} = \frac{1}{4} \frac{EA}{l_\text{new}} = \frac{1}{4} k_\text{new} \]

Note, that this is assuming a rubber band like set-up, where we assume that the spring can be modelled by a uniform bar of elastic material. A more rigorous proof of the dependence of spring constant and the length of the spring would involve the geometry of the spring and various torques on the spring elements when it is under load. However, all this complication just brings additional pre-factors to the spring constant, which are independent of the length of the spring.
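The \(k = EA/l\) scaling is easy to check numerically; the material constants below are illustrative (roughly steel), not values from the answer:

```python
# Spring constant from Young's modulus, k = E*A/l, and what happens
# when the elastic bar is cut to a quarter of its length.

E = 200e9        # Pa, Young's modulus (roughly steel; illustrative only)
A = 1e-6         # m^2, cross-sectional area (illustrative)
l_old = 0.4      # m, original length (illustrative)

k_old = E * A / l_old
k_new = E * A / (l_old / 4)   # same material and cross-section, 1/4 the length

print(k_new / k_old)          # ratio is 4.0, as derived above
```

Only the length enters the ratio, so the factor of 4 is independent of the particular \(E\) and \(A\) chosen.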

A heuristic derivation of the Young Modulus-Force relation

I thought I might talk about why \(E\) is always constant for a given type of material. All bonds between atoms can be thought of as tiny springs obeying Hooke's law in the case of small displacements.

Because of energy conservation we already know (from the answer by Luboš Motl) that if we connect several springs in series, then we will change the effective spring constant: \[ k_\text{new} = k / n \] where \(n\) is the number of springs and \(k\) is the single bond spring constant.

Hence, for the same extension, the force scales with the length of the spring as follows: \[ F = \frac{k x}{n} = k\frac{l_\text{unit}}{l}x = kxl_\text{unit} \times \frac{1}{l} = \text{const.} \times \frac{x}{l} \]

Now, what about connecting the springs in parallel? From the energy conservation argument, we know that the effective spring constant then changes in a different way: \[ k_\text{new} = kn \] where \(n\) now will be related to the surface area of the material.

Now, the force for the same extension scales as: \[ F = knx = k\rho x \times A = \text{const.} \times Ax \] where \(\rho\) is the density of the springs.

There are only two ways of combining springs (in parallel or in series), hence the overall formula for force must be of the form below:

\[ F = E \times \frac{A}{l} x \]

And we can call that unknown constant \(E\) the Young’s modulus, which we know will be specific to the material (i.e. the nature of those chemical bonds). What is more, because of our analysis above, we know that for a given material the remaining unknown quantity \(E\) will be independent of the cross-sectional area, length or extension of the spring.

So with very simple reasoning and some basic knowledge of energy conservation, we can recover the law I assumed in the first part of the explanation.

EDIT: I noticed that there were some errors in the second part of my explanation, hence a complete overhaul. Also, I hope I clarified the first part of the explanation.

Answer 3 (score 3)

For a given spring, \(k\) is a constant, as long as you're talking about an ideal spring. In other words, the definition of an ideal spring is that it applies a force proportional to its deformation length (at both ends, of course).

I’m afraid both you and your professor are wrong. The correct formula should be:

\(k_{\rm new} = 4k_{\rm orig}\)

To show that, let's do the following gedankenexperiment. Suppose you have your original spring in tension. Its deformation is \(L\), and it applies the corresponding force \(F\).

Now imagine that your spring is actually 4 consecutively connected springs of length \(L/4\). Each spring is in equilibrium, which means that for every spring the forces applied to its two ends are equal. Since all the springs are connected and apply forces on each other, all the forces applied at all the spring ends are the same, and they obviously equal \(F\).

OTOH, each spring is deformed by only \(L/4\). Hence their "constants" are 4 times higher.

53: Relation between water flow and pressure (score 131771 in 2015)

Question

Is there any equation that states the relation between pressure and water flow?

I.e., let's say that in 1 hour with 8 mca (meters of water column) of pressure I obtain 50 m³. What if (given the same conditions) I run the same test but now with 15 mca?

Is that just proportional?

Answer accepted (score 11)

The water velocity into a region with atmospheric pressure goes as the square root of the pressure difference by Bernoulli’s law. So if you quadruple the pressure difference, you get twice the speed.

This is not exact, because as the water fills up your container, unless it is coming in from the very top, the accumulating water will produce a counterpressure.

The law is that when fluid is dynamically moving, the kinetic energy of the water coming out per unit mass, which is half the square of the velocity, is the loss in potential energy of the top of the imaginary water column of height h per unit mass, which is gh. So the velocity at the outlet is:

\[ v=\sqrt{2gh} = \sqrt{2P \over \rho}\]

Exactly the same as if you dropped the water from the top of the imaginary column to the point where you let out the water, and the answer doesn’t depend on whether you have the pressure produced by a column as I imagined it above. This answer conserves energy, since water is disappearing from the top, and appearing at the bottom moving with the same speed as if you dropped it from the top of the column.

Answer 2 (score 7)

Everything Ron explained can be extracted directly from the Bernoulli equation

\[\frac{1}{2} \rho v_1^2 + \rho g h_1 + p_1 = \frac{1}{2} \rho v_2^2 + \rho g h_2 + p_2.\]

Say you have a huge open container with a small hole at the bottom. You have the same (atmospheric) pressure at both ends, \(p_1 = p_2 = 1.013 \times 10^5\,\)Pa, and the velocity of water at the top of the container is negligible, \(v_1 \approx 0\), so you get the expression

\[v_2 = \sqrt{2 g (h_1-h_2)} = \sqrt{2 g \Delta h},\]

where \(\Delta h\) is the height of the water column.

You can obtain the relation between water column height and pressure in the container from the Bernoulli equation too, by taking \(v_1 = v_2 = 0\), which gives

\[p_2 - p_1 = \rho g (h_1 - h_2),\]

\[v_2 = \sqrt{\frac{2}{\rho} (p_2-p_1)} = \sqrt{\frac{2}{\rho} \Delta p}.\]

Answer 3 (score -1)

So basically, velocity is proportional to the square root of the differential pressure. There is another equation, \(Q = vA\), where \(Q\) is the volume flow, \(v\) is the velocity of the flow, and \(A\) is the cross-sectional area.
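Putting the formulas from these answers together in a small sketch: the head values (8 m and 15 m of water column) are from the question, while the hole area is a made-up number:

```python
import math

# Torricelli outlet speed v = sqrt(2 g h), equivalently sqrt(2 dP / rho),
# and volume flow Q = v * A.

g = 9.81          # m/s^2
rho = 1000.0      # kg/m^3, water
A = 1e-3          # m^2, cross-sectional area of the outlet (hypothetical)

def outlet_speed(h):
    return math.sqrt(2 * g * h)

v8, v15 = outlet_speed(8.0), outlet_speed(15.0)
print(v15 / v8)                      # sqrt(15/8) ~ 1.37: not proportional to head

# Same speed from the pressure form, with dP = rho * g * h for the 8 m column:
dP = rho * g * 8.0
assert abs(math.sqrt(2 * dP / rho) - v8) < 1e-9

# Volume flow through the hole also scales as the square root of the head:
Q8, Q15 = v8 * A, v15 * A            # m^3/s
```

So raising the head from 8 to 15 mca increases the flow by a factor of about 1.37, not 15/8, which answers the "is it just proportional?" part of the question.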

54: Why do diamonds shine? (score 130001 in 2018)

Question

I have always wondered why diamonds shine. Can anyone tell me why?

An image for a diamond shining!

Answer accepted (score 17)

Diamond is one of the hardest materials. We know that it's an allotrope of carbon. A diamond (crystalline in nature) has a three-dimensional arrangement of carbon atoms linked to each other by strong covalent bonds. What you've shown is a round brilliant cut diamond.

Actually, the secret that's rattling inside a diamond is refraction, total internal reflection (not to be confused with ordinary reflection) & dispersion. The refractive index of diamond is pretty high (2.417) and it is also dispersive (the coefficient is 0.044). Due to this, diamond has important applications in optics.

Consider an ideal cut diamond. I'll explain according to the figure below. When the light is incident at angle \(1\), it refracts inside and travels through the lattice. At the surface which separates the air & diamond media, the incident angle \(2\) is well above the critical angle (\(c_a\)), and likewise (\(3\) & \(4\)) reflection takes place at different surfaces of the diamond. Finally, the light refracts out.

The first one shows the mechanism of internal dispersive reflection. The second figure shows the reflections inside ideal cut, deep and shallow cut diamonds.

[Figure: Reflection]

[Figure: Cut]

Note: For total internal reflection to take place, light must travel from an optically denser medium to a relatively rarer medium. Also, the incident angle must be greater than the critical angle.

There are youtube goodies regarding the topic…

Answer 2 (score 11)

The phenomenon you’re looking for is called total internal reflection. You could also have a look at this link for more information.

To draw a comparison with glass: in glass (for the most part), when light is incident on it, it gets refracted at one surface, refracted again at the other surface, and leaves the material. This doesn't always happen; there is some total internal reflection, but the critical angle for glass is quite large, so you don't usually see it happening.

But diamond on the other hand has a really high refractive index (\(\approx 2.4\)) and because of that the critical angle for total internal reflection to occur is much smaller. So a greater percentage of the incident light gets internally reflected several times before it emerges from the diamond, making the diamond look really shiny.

Edit: As @JohnRennie has also mentioned, it's also the shape that matters to the shininess. An uncut diamond doesn't look as bright since the angles of incidence aren't made to be beyond the critical angle.

Answer 3 (score 7)

Diamond has a very high refractive index (about 2.42 compared to about 1.5 for glass). The amount of light reflected at an air/whatever interface is related to the refractive index change at the interface, and generally speaking the bigger the refractive index change the more light is reflected. So if you compare a diamond to a piece of glass cut into the same shape, the diamond will reflect more light and therefore it will sparkle more.

The sparkling isn’t just the high refractive index - the shape matters too. Diamonds are cut so that a light ray falling on the diamond is reflected multiple times inside the diamond and therefore sends light out in all directions.
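The critical angles quoted implicitly in these answers follow from Snell's law, \(\theta_c = \arcsin(n_2/n_1)\), for light trying to leave the dense medium into air (\(n_2 = 1\)). A quick computation with the refractive indices mentioned above:

```python
import math

# Critical angle for total internal reflection at a medium/air interface.
def critical_angle_deg(n):
    return math.degrees(math.asin(1.0 / n))

print(critical_angle_deg(2.417))   # diamond: about 24.4 degrees
print(critical_angle_deg(1.5))     # glass:   about 41.8 degrees
```

Rays inside a diamond only need to hit a facet at more than ~24° from the normal to be trapped and reflected again, which is why a well-cut stone bounces light around so many times before letting it out.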

55: Why does the (relativistic) mass of an object increase when its speed approaches that of light? (score 126958 in 2014)

Question

I’m reading Nano: The Essentials by T. Pradeep and I came upon this statement in the section explaining the basics of scanning electron microscopy.

However, the equation breaks down when the electron velocity approaches the speed of light as mass increases. At such velocities, one needs to do relativistic correction to the mass so that it becomes[…]

We all know about the famous theory of relativity, but I couldn't quite grasp the "why" of its concepts yet. My current knowledge is mostly popular science stuff, so I have a basic understanding of some basic principles. This might shed new light on what I already know about time slowing down for me if I move faster.

Why does the (relativistic) mass of an object increase when its speed approaches that of light?

Answer accepted (score 10)

The complete relevant text in the book is

The de Broglie wave equation relates the velocity of the electron with its wavelength, \(\lambda = h/mv\) … However, the equation breaks down when the electron velocity approaches the speed of light as mass increases. …

Actually, the de Broglie wavelength should be \[ \lambda = \frac hp, \] where \(p\) is the momentum. While \(p = mv\) in classical mechanics, in special relativity the actual relation is \[ \mathbf p = \gamma m \mathbf v = \frac{m\mathbf v}{\sqrt{1-\frac{v^2}{c^2}}} \] where \(m\) is the rest mass. If we still need to make the equation \(p = mv\) correct, we introduce the concept of “relativistic mass\(M = \gamma m\) which increases with \(v\).
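The size of the correction is easy to see numerically. A sketch comparing the classical and relativistic de Broglie wavelengths for an electron at an illustrative speed (the speed is a made-up example, not from the book):

```python
import math

# de Broglie wavelength lambda = h / p, with classical p = m*v versus
# relativistic p = gamma * m * v, for an electron.

h = 6.626e-34       # J*s, Planck constant
m_e = 9.109e-31     # kg, electron rest mass
c = 2.998e8         # m/s, speed of light

def wavelengths(v):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return h / (m_e * v), h / (gamma * m_e * v)   # (classical, relativistic)

lam_classical, lam_relativistic = wavelengths(0.7 * c)
print(lam_classical, lam_relativistic)
# The relativistic wavelength is shorter by the factor 1/gamma, which is
# exactly the "mass increase" correction the book refers to.
```

At SEM electron energies the two start to diverge noticeably, which is why the textbook says \(\lambda = h/mv\) "breaks down" near the speed of light.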

Answer 2 (score 11)

The mass (the true mass which physicists actually deal with when they calculate something concerning relativistic particles) does not change with velocity. The mass (the true mass!) is an intrinsic property of a body, and it does not depend on the observer's frame of reference. I strongly suggest reading this popular article by Lev Okun, where he calls the concept of relativistic mass a "pedagogical virus".

What actually changes at relativistic speeds is the dynamical law relating momentum and energy to the velocity (already written above). Let me put it this way: trying to ascribe the modification of the dynamical law to a changing mass is the same as trying to explain non-Euclidean geometry by redefining \(\pi\)!

Why this law changes is the correct question, and it is discussed in the answers here.

56: How can two seas not mix? (score 126816 in 2016)

Question

How can two seas not mix? I think this is commonly known and the explanation everyone gives is “because they have different densities”.

[image]

What I get is that they eventually will mix, but this process takes a long time.

From what you see in this picture you can see that they have a clear separation line as if you would mix water and oil.

Basically what I’m skeptical about is the clear separation line between them. Putting highly salted water and normal water in the same bowl will cause almost instant mixing. Can you get the same effect as shown in the picture in a bowl at home ?

I’m looking for a more complete answer than just that they have different densities. Thanks.

Answer accepted (score 19)

There are two mechanisms for mixing at a liquid-liquid interface, firstly diffusion and secondly physical agitation.

Diffusion is negligibly slow in liquids; it takes days for solutes to travel a few centimetres, so the mixing is dominated by physical agitation, e.g. wave action, convection currents, wind mixing etc.

In this particular case it’s hard to judge what effect waves and wind have. The sea looks very calm, so I’d guess that waves and wind have little effect and it’s not that surprising that mixing is slow. I bet that line wouldn’t be as well defined the morning after a storm.

This sort of division isn't that unusual. I grew up in Khartoum where the White Nile and the Blue Nile meet, and the division between them remains sharp for miles. Although I don't have any snaps from that era (I was five :-) the following picture found with google images shows the division nicely.

[image]

This can also be seen from space, as in this NASA Earth Observatory Image of the Day:

Answer 2 (score 8)

I asked the oceanologist (Nikolai Koldunov) about this photo. Here is his answer:

In the ocean, even if the difference in density is small (e.g., of the order of \(0.1\,kg/m^3\)), the process of mixing between two water masses is rather slow (without strong turbulence). The picture was probably taken close to the estuary of a big river. In this case the density difference between fresh river water and salty sea water should be of the order of \(20\,kg/m^3\), which is why the boundary is visible so clearly (taking into account the calm wind conditions).

I (Grisha) checked the location on Google maps http://goo.gl/xY41z and yes — there are three huge rivers not far from the Flickr geotag — Dangerous River, Ahrnkin River and Italio River. UPDATE. Actually you can clearly see this sharp front on Bing Maps! — http://binged.it/VoGDhh

The front is most likely not strictly vertical: the fresh, warm water runs on top of the cold, salty ocean water, which in turn submerges under the fresh water. Here is a fragment of a lecture explaining how the vertical front can form: [image] Your picture is an example of a so-called salt wedge estuary. The classical example of such a wedge is the Columbia River.

the Columbia River plume front

On the Internet, you can find a lot of such pictures from satellites; here are two examples:

http://www.ifish.net/board/showthread.php?t=293094

http://www.aslo.org/photopost/showphoto.php/photo/271/title/fraser-river-satellite-image/cat/504

Answer 3 (score 6)

Nobody has thus far touched on the fact that freshwater at a river/ocean interface is quite likely to be muddy. What does this mean? It means that the water is likely to contain a stable suspension of silicate micro- or nanoparticles, which are unable to aggregate due to short-range electrostatic repulsion. This is what is called a colloid.

The example of the turbidity of fresh versus ocean water was one that came up in a phys chem course I did a few years ago. The clarification of water at a river delta is something that can be seen in satellite imagery worldwide and has less to do with the dilution of muddy water in an ocean and more to do with the destabilising effect of dissolved ions on muddy colloids, which results in a radical reduction of aggregation timescale.

What this means is that in mixing muddy fresh and clear salt water the colloidal mud particles will rapidly aggregate and literally drop out of the water. I would posit that what is being depicted here is actually a phase transition of sorts between ‘stable colloid’ on the left and ‘unstable colloid’ on the right, with an attendant sharp distinction in light scattering off suspended particles. The salinity gradient thus may be somewhat smoother than the boundary would suggest as a fairly small change in salinity may be the difference between muddy water that is indefinitely stable, versus muddy water that will clarify in seconds.

57: Is two cars colliding at 50mph the same as one car colliding into a wall at 100 mph? (score 126620 in 2017)

Question

I was watching a youtube video the other day where an economist said that he challenged his physics professor on this question back when he was in school. His professor said each scenario is the same, while he said that they are different, and he said he supplied a proof showing otherwise.

He didn’t say whether or not the cars are the same mass, but I assumed they were. To state it more clearly, in the first instance each car is traveling at 50mph in the opposite direction and they collide with each other. In the second scenario, a car travels at 100 mph and crashes into a brick wall. Which one is “worse”?

When I first heard it, I thought, “of course they’re the same!” But then I took a step back and thought about it again. It seems like in the first scenario the total energy of the system is the KE of the two cars, or \(\frac{1}{2}mv^2 + \frac{1}{2}mv^2 = mv^2\). In the second scenario, it’s the KE of the car plus wall, which is \(\frac{1}{2}m(2v)^2 + 0 = 2mv^2\). So the car crashing into the wall has to absorb (and dissipate via heat) twice as much energy, so crashing into the wall is in fact worse.

Is this correct?

To clarify, I'm not concerned with the difference between a wall and a car, and I don't think that's what the question is getting at. Imagine instead that in the second scenario, a car is crashing at 100mph into the same car sitting there at 0mph (with its brakes on, of course). The first scenario is the same: two of the same cars going 50mph in opposite directions collide. Are those two situations identical?

PS: This scenario is also covered in an episode of mythbusters.

Answer accepted (score 32)

I don’t think any of the other answers have made the following point clear enough, so I am going to give it a try. Both scenarios are very similar before the collision, but they differ greatly afterwards…

From a stationary reference, you see the cars driving towards each other at 50mph, but of course if you choose a reference frame moving with the first car, then the second will be headed toward it at 100 mph. How is this different from the wall scenario?

Well, from a stationary reference frame, after the crash both cars remain at rest, so the kinetic energy dissipated is \(2\times \frac{1}{2}mv^2\).

From the reference frame moving with the first car, the kinetic energy before the crash is \(\frac{1}{2}m(2v)^2=4\times\frac{1}{2}mv^2\), but after the crash the cars do not remain at rest, but keep moving in the direction of the second car at half the speed. So of course the kinetic energy after the crash is \(2\times\frac{1}{2}mv^2\), and the total kinetic energy lost in the crash is the same as when considering a stationary reference frame.

In the car against a wall, you do have the full dissipation of a kinetic energy of \(4\times\frac{1}{2}mv^2\).
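The frame-by-frame bookkeeping above can be checked with a few lines of arithmetic; the car mass is a made-up number and the speed is roughly 50 mph in SI units:

```python
# Kinetic energy dissipated in a perfectly inelastic head-on collision of
# two equal cars, computed in two reference frames, plus the wall case.

m = 1000.0    # kg per car (hypothetical)
v = 22.35     # m/s, roughly 50 mph

# Road frame: cars at +v and -v, wreck ends at rest.
loss_road = 2 * 0.5 * m * v**2 - 0.0

# Frame moving with car 1: the other car approaches at 2v; by momentum
# conservation (m * 2v = 2m * v_final) the wreck moves at v afterwards.
ke_before = 0.5 * m * (2 * v) ** 2
ke_after = 0.5 * (2 * m) * v**2
loss_car_frame = ke_before - ke_after
assert abs(loss_road - loss_car_frame) < 1e-6   # same dissipated energy

# Car into an immovable wall at 2v (100 mph): all of the KE is dissipated.
loss_wall = 0.5 * m * (2 * v) ** 2
print(loss_wall / loss_road)   # 2.0: the wall crash dissipates twice as much
```

The dissipated energy is frame-independent, as it must be, and the 100 mph wall crash is twice as bad as the two-car 50 mph crash, matching the accepted answer.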

Answer 2 (score 7)

Actually, assuming that the oncoming car is the same mass as yours, colliding with an oncoming car at 50 MPH is equal to colliding with an ideal immovable wall at 50 MPH. Consider this:

I’m going to set up one of two experiments. I’m either going to ram car A into car B, both of them moving 50 MPH in opposite directions, or I’m going to ram car A into a solid wall at 50 MPH. However, I’m going to put up a shroud so that you can only see car A, you will be unable to see either car B or the wall, whichever one my coin-flip tells me to use.

Because you can now only see car A and its contents, how would you tell which experiment I'd decided to do?

Answer 3 (score 6)

Certainly they are not exactly the same - a wall is not the same thing as a car, and a crash is a very complicated physical event. Even if simple calculations involving momentum and energy or descriptions involving reference frames suggest that aspects of a car-car and car-wall collision are the same, the real collisions will be fairly different.

In this case, though, simple considerations do reveal that the car-car crash at 50 mph is almost certainly safer than crashing 100 mph into a wall. Your energy calculation is a fine way to see this.

Another is to consider the car-car collision from a frame co-moving with the second car. In this frame, you’re going 100 mph and crash into a stationary car. So the question is like asking whether it is worse to crash into a stationary wall or a stationary car when going 100 mph (apart from the fact that the movement relative to the road is a little different). Of course crashing into the car is less dangerous than crashing into the wall, confirming your earlier result.

I have often heard the same problem rephrased so that you consider crashing into a wall at 100 mph or crashing into a car when you’re both going 100 mph. It may be that this was the original problem the physics professor mentioned, and it got distorted somewhere in the game of telephone it played since then.

In that scenario, some people say they are equally bad because the energy dissipated per car is the same. Personally, I would probably go for the wall because at least some of the car's energy should go into the wall, but here the details become important (e.g. what if I fly through the window and then hit the wall?), and the energy alone is not a strong enough difference to say which is worse. I imagine that either crash is very likely to be fatal at that speed.

Addressing your new question, two cars crashing head-on each at 50 mph is essentially the same as one car going 100 mph and crashing into a stationary car, by the relativity principle. However, relativity is broken by the existence of the road, so to the extent that the cars interact with the road during the collision there may be some differences.

58: What is the simplest way to prove the Earth is round? (score 124738 in 2018)

Question

Assume you’ve come in contact with a tribe of people cut off from the rest of the world, or you’ve gone back in time several thousand years, or (more likely) you’ve got a numbskull cousin.

How would you prove that the Earth is, in fact, round?

Answer accepted (score 48)

Simplest, you say? There are two that strike me as being simple to demonstrate. Luckily someone on the internet has already spent some time to help us here to make these easy to illustrate:

  1. Shadows differ from place to place

Flat earth Curved earth

Eratosthenes carried out this experiment to determine the circumference of the Earth, already assuming its spherical shape; incidentally, the proof of that shape follows from the procedure.

However, a demonstration can be achieved by a simple, local experiment (as opposed to having a party venture to a distant enough point):

Take a piece of card (A3, or so), attach two obelisks to the card by their bases and, with a light source, produce shadows - now, slowly bend the card so that it becomes convex (that is, the side with obelisks attached bulging out) and watch the effect.

  2. You can see farther from higher

Flat earth Curved earth

There are numerous other ways of demonstrating that the Earth is round, or curved, at least, from analysing the center of gravity to simply observing the other round objects that are visible in space; but I believe these illustrations to be the simplest to comprehend.

Images sourced from SmarterThanThat
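Both demonstrations can be quantified. The numbers below are round figures close to Eratosthenes' own (a 7.2° shadow-angle difference over roughly 800 km), and the horizon formula \(d \approx \sqrt{2Rh}\) is the standard small-height approximation:

```python
import math

# 1. Shadows differ from place to place: the angle difference and the
#    north-south separation give the circumference directly.
angle_diff_deg = 7.2
separation_km = 800.0
circumference_km = 360.0 / angle_diff_deg * separation_km
print(circumference_km)              # 40000 km, close to the true ~40075 km

# 2. You can see farther from higher: horizon distance d ~ sqrt(2 R h).
R = 6371e3                           # m, Earth's mean radius
def horizon_km(h):
    return math.sqrt(2 * R * h) / 1000.0

print(horizon_km(2.0), horizon_km(100.0))   # eye level vs. a tall tower
```

On a flat Earth the shadow angles would be equal everywhere and the horizon distance would not depend on height, so either measurement distinguishes the two cases.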

Answer 3 (score 29)

Another way is the triple-right triangle:

  1. You move in a straight line for a long enough distance
  2. Turn right 90°, walk in that same direction for the same distance
  3. Turn right 90° again and walk the same distance again

After this you’ll end up at the starting point. This is not possible on a flat surface since you’d just be “drawing” an incomplete square.

[image]

Source: http://www.math.cornell.edu (add /~mec/tripleright.jpg to find the image)

59: What’s the standard “roadmap” to learning quantum physics? (score 124685 in 2017)

Question

I’m really interested in quantum physics and would like to learn more. However, I don’t know where to start and in what order I should learn things. So, ideally I’m looking for some sort of roadmap of what to learn. What physics topics do I need to know to start learning about quantum mechanics? (In addition to the mathematical topics mentioned at What is the math knowledge necessary for starting Quantum Mechanics?)

My current knowledge is mostly popular science stuff, like tv shows on Discovery Science and National Geographic Channel. So I have a basic understanding of some basic principles. There's also a recent lecture from Brian Cox that I have watched which gave a bit more in-depth information.

Answer accepted (score 29)

I would suggest that you don't do any preliminary reading, and just learn QM directly. There is not much to it; the requisite background is very primitive linear algebra, and Dirac's book "The Principles of Quantum Mechanics" and Feynman's "Lectures on Physics Vol III" can be read with Wikipedia help without any prerequisites.

The classical mechanics you need to know is not very sophisticated either— you just need to know Newton’s laws, and how they come from a Lagrangian or Hamiltonian, which is covered in standard sources. You don’t need so much deep stuff, although knowing Poisson brackets is handy for seeing the vestigial quantumness in the classical mechanics structure.

I would suggest reading the following Wikipedia pages for a historical perspective, which helps a lot with historical literature:

This is wrongly left out of most books, and this is a shame. There is no unified presentation of the historical material except on Wikipedia, and this is why these pages are up there. Once you get the historical stuff (it’s not a lot), Dirac gives a conceptually self-contained introduction to the mathematics, the notation, and the physics, while Feynman is path-integral friendly, so you can go on to read Feynman and Hibbs, or Mandelstam and Yourgrau without any delay.

It is usually a waste of time to try to go through prerequisites, as these are usually boring and most of the material doesn’t end up getting used. For QM, you need to come in knowing what a matrix is, and what an eigenvalue is, which is probably best learned from Dirac.

Answer 2 (score 9)

I'm sorry to tell you that anything you learn on TV about quantum mechanics is useless. Not many people in the world understand quantum mechanics, and many of those who claim to are either naive or liars. I don't understand quantum mechanics, but I could calculate things for you, even explain a model of the hydrogen atom. Quantum mechanics is not "just" understanding; it's about BREAKING all your concepts; it will destroy every notion you have up to this moment. With this said, I recommend you this college-like path:

  • Algebra and analysis: vector spaces, integrals, differential equations are BASIC, so go deep into them, especially vector spaces so you can "upgrade" to tensors from there.

  • While learning the basics of algebra and analysis you should learn about Newtonian mechanics and Maxwell's electromagnetism (in that order).

  • Once you have done that, learn everything about Newtonian mechanics again using the Lagrange and Hamilton formulations. At that point you will be able to mix Maxwell's electromagnetism and Newtonian physics.

  • If you really want to go deep you should do some digging into special relativity, but that is somewhat optional… (not if you want real understanding).

  • Now start with old quantum mechanics books; not very similar to modern physics, but absolutely necessary if you want to understand WHY we went the way we went.

  • At this point you actually know something about QM and you are ready to learn the Dirac principles and finally destroy every concept about reality you had before starting this path.

This could be a 2-4 year project, but it is possible. I would recommend you attend lectures (in my country you can go without paying, but they won't give you the certificate). Good luck.

Note: after ALL THAT, you will profit a lot from reading the EPR paradox, Bohr's & Heisenberg's opinions about quantum mechanics, and of course Bell's article about entangled particles. Don't get mixed up by von Neumann's principles; in my humble opinion he was too much of a mathematician to understand the depth of Bohr's and Heisenberg's thinking. And by the way, if you ever don't understand something, you are doing it right.

Answer 3 (score 8)

I think the usual way they teach it at a university is pretty much the most time-efficient way to go. The math and, at least, Classical Mechanics will be necessary.

Nobel Laureate Gerard ’t Hooft at one point put together a selection of introductions in a roadmap style.


The navigation system on that site doesn’t work for me anymore. But the important part, the links on the right hand side, work.

60: What is the difference between stress and pressure? (score 124136 in 2014)

Question

What is the difference between stress and pressure? Are there any intuitive examples that explain the difference between the two? How about an example of when pressure and stress are not equal?

Answer accepted (score 22)

Pressure is defined as force per unit area applied to an object in a direction perpendicular to its surface, and pressure can naturally cause stress inside an object. Stress, by contrast, is a property of a body under load and is related to the internal forces: it is the reaction produced by the molecules of the body under some action which may produce deformation. The intensity of these internal forces per unit area is known as stress (picture from Wikipedia):

stress


EDIT PER COMMENTS

Overburden Pressure or lithostatic pressure is a case where the gravity force of the object’s own mass creates pressure and results in stress on the soil or rock column. This stress increases as the mass (or depth) increases. This type of stress is uniform because the gravity force is uniform.

http://commons.wvc.edu/rdawes/G101OCL/Basics/earthquakes.html

Included in lithostatic pressure are the weight of the atmosphere and, if beneath an ocean or lake, the weight of the column of water above that point in the earth. However, compared to the pressure caused by the weight of rocks above, the amount of pressure due to the weight of water and air above a rock is negligible, except at the earth’s surface. The only way for lithostatic pressure on a rock to change is for the rock’s depth within the earth to change.

Since this is a uniform force applied throughout the substance, mostly due to the substance itself, the terms pressure and stress are somewhat interchangeable here, because the pressure can be viewed as both an external and an internal force.

For a case where they are not equal, just look at the image of the ruler. If pressure is applied at the far end (top of image), it creates unequal stress inside the ruler, especially near the corners where the internal stress is high.

Answer 2 (score 11)

Given a stress tensor \(\boldsymbol{\sigma}\), which has 9 components in general, the pressure (in continuum mechanics at least) is defined as \(P = \frac{1}{3}\operatorname{tr}(\boldsymbol{\sigma})\).

So the pressure at a point in the continuum is the average of the three normal stresses at the point. The off-diagonal terms manifest as shear stress.

It’s hard to answer just “stress” without the question being more specific, because stress is not a scalar. Pressure is always distinct from the full stress tensor, but the two are related.

Answer 3 (score 4)

The difference between stress and pressure has to do with the difference between isotropic and anisotropic force. There’s a Wikipedia section on the decomposition of the Cauchy stress \(\boldsymbol{\sigma}\) into “hydrostatic” and “deviatoric” components, \[\boldsymbol{\sigma}=\mathbf{s}+p\mathbf{I}\] where the pressure \(p\) is \[p=\frac{1}{3}\text{tr}(\boldsymbol{\sigma})\] where \(\mathbf{I}\) is the \(3\times 3\) identity matrix, and where \(\mathbf{s}\) is the traceless component of \(\boldsymbol{\sigma}\).

The linked article actually gives a pretty good intuitive explanation of \(p\mathbf{I}\):

(From article) A mean hydrostatic stress tensor \(p\mathbf{I}\), which tends to change the volume of the stressed body.

This follows since the surface force experienced by a plane with normal vector \(\mathbf{n}\) is given by \[\mathbf{T}^{(\mathbf{n})}=\mathbf{n}\cdot\boldsymbol{\sigma}\] which for a purely hydrostatic stress becomes \[\mathbf{T}^{(\mathbf{n})}=\mathbf{n}\cdot p\mathbf{I}=p\mathbf{n}\] which points in the same direction as the normal to the plane. This basically means that a cube of material will want to expand like a balloon if \(p>0\), and contract if \(p<0\).

Meanwhile, the deviatoric component means that there are forces at play which don’t just tend to expand or contract things, such as shear forces.
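The decomposition described above can be sketched numerically, following the answer’s convention \(\boldsymbol{\sigma}=\mathbf{s}+p\mathbf{I}\) with \(p=\frac{1}{3}\operatorname{tr}(\boldsymbol{\sigma})\); the stress values below are made up purely for illustration:

```python
# Hydrostatic/deviatoric split of a Cauchy stress tensor, using the
# answer's convention sigma = s + p*I with p = tr(sigma)/3.
# The numerical values are illustrative, not from any real material.
import numpy as np

sigma = np.array([[10.0, 2.0, 0.0],
                  [ 2.0, 5.0, 1.0],
                  [ 0.0, 1.0, 3.0]])   # symmetric Cauchy stress (made up)

p = np.trace(sigma) / 3.0              # "pressure" = mean normal stress
s = sigma - p * np.eye(3)              # deviatoric (traceless) part

# Traction on a plane with unit normal n: T = n . sigma
n = np.array([1.0, 0.0, 0.0])
T = n @ sigma

print(p)             # mean normal stress
print(np.trace(s))   # ~0: the deviatoric part is traceless
```

For a purely hydrostatic \(\boldsymbol{\sigma}=p\mathbf{I}\), the traction `T` would come out parallel to `n`, which is exactly the “expand like a balloon” behaviour described above.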

How about an example of when pressure and stress are not equal?

In a solid, pure shear waves can exist. Unlike in acoustic pressure waves, shear waves have constant pressure; the forces that propagate the wave are not due to pressure, but are due to shear strain.

61: Would you die if you put your hands on a powerline? (score 123153 in 2012)

Question

You know how birds perch on powerlines without getting electrocuted? What if by some chance I find myself falling and I grab onto one of them? Let’s say both of my hands are on the same line; would I get electrocuted?

I am thinking I won’t, because the current won’t rush through me; I won’t be part of a circuit.

How does the ground play a role in this? I’ve heard people say that the ground creates a potential difference, but how? There is only voltage across the powerlines, the pole connecting to the ground is wood, an insulator?

Thanks

Answer accepted (score 12)

Hanging from a power line you should be as safe as a bird.

The voltage difference is between the lines (e.g. in a 3-phase system) and between the line and ground. This voltage difference exists across the insulators and pole, as well as through the air to ground. These voltage differences are obviously small enough to avoid striking an arc, hence no current flows between the lines or between line and ground. If you are hanging from one line, there is no change in the separation between the lines (unless you are swinging wildly) and hence again no current flows between the lines. As the distance between the lines will usually be smaller than that between your feet and the ground, again no current will flow, and you will be safe. Note that, if this distance were too small, you would not be safe standing under the line either! 

Your real problem will be to get down from the line. Unless someone can switch off the power, or you are an acrobat who can jump from the lines to the pole, you will need to touch both the line and the pole simultaneously. If you are hanging from high voltage lines (tens of thousands of volts, not hundreds), touching even a wooden pole at the same time as the power line may kill you, unless the wood is extremely dry. Even though wood is considered a poor conductor, when it gets damp its conductivity increases dramatically.

Answer 3 (score 2)

There actually is a current that moves through your body, albeit not enough to hurt you. You don’t even need to touch the powerline: just standing near it will cause a current to flow in your body. If we model a single power cable hanging a height \(H\) above the ground as an electrically charged cable of infinite length above a perfect conductor, then the electric potential relative to the ground at a distance \(R\) below the cable is:

\[V(R) = V_0 \frac{\log\left(\frac{2 H}{R}-1\right)}{\log\left(\frac{2 H}{R_{0}}-1\right)}\]

where \(R_0\) is the radius of the cable and \(V_0\) is the cable’s voltage relative to the ground. E.g. if you take \(R_0 = 10\) cm and \(V_0 = 10^6\) volts, then the potential difference between the head and the feet of someone 1.8 meters tall, a meter below the powerline, will be approximately 166,000 volts. Now, this voltage has a frequency of 50 Hz, and the human body has a finite resistance and electric capacity, so currents will flow in the body in response to the electric field. But these currents are not very strong. Even if the body had zero resistance, so that its potential were constant, only a small amount of charge would need to move around to neutralize the 166,000 volts.
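The formula can be evaluated directly. A hedged sketch: the answer does not state the cable height \(H\), so \(H = 30\) m is an assumption here; it happens to reproduce the quoted ~166,000 V head-to-feet difference.

```python
# Evaluating the image-charge potential formula from the answer.
# ASSUMPTION: the answer never states the cable height H; H = 30 m is
# chosen here because it reproduces the quoted ~166,000 V figure.
import math

V0 = 1e6    # cable voltage relative to ground, volts
R0 = 0.10   # cable radius, meters
H  = 30.0   # assumed cable height above ground, meters

def potential(R):
    """Potential (V) at distance R below the cable, relative to ground."""
    return V0 * math.log(2 * H / R - 1) / math.log(2 * H / R0 - 1)

head = potential(1.0)   # head: 1 m below the cable
feet = potential(2.8)   # feet: 1 m + 1.8 m below the cable
print(head - feet)      # ~1.66e5 volts
```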

62: How does gravity work underground? (score 123073 in 2014)

Question

Would the effect of gravity on me change if I were to dig a very deep hole and stand in it? If so, how would it change? Am I more likely to be pulled downwards, or pulled towards the edges of the hole? If there would be no change, why not?

Answer accepted (score 71)

The other answers provide a first-order approximation, assuming uniform density (though Adam Zalcman’s does allude to deviations from linearity). (Summary: All the mass farther away from the center cancels out, and gravity decreases linearly with depth from 1 g at the surface to zero at the center.)

But in fact, the Earth’s core is substantially more dense than the outer layers (mantle and crust), and gravity actually increases a bit as you descend, reaching a maximum at the boundary between the outer core and the lower mantle. Within the core, it rapidly drops to zero as you approach the center, where the planet’s entire mass is exerting a gravitational pull from all directions.

The Wikipedia article on “gravity of Earth” goes into the details, including this graph:

Gravitational acceleration vs. depth

“PREM” in the figure refers to the Preliminary Reference Earth Model.

Larger versions of the graph can be seen here

And there are other, smaller, effects as well. The Earth’s rotation results in a smaller effective gravity near the equator, the equatorial bulge that results from that rotation also has a small effect, and mass concentrations have local effects.

Answer 2 (score 20)

Assuming a spherically symmetric mass distribution within Earth, one can compute the gravitational field inside the planet using Gauss’s law for gravity. One consequence of the law is that, when computing the gravitational field at a distance r < R (with R being the radius of the Earth), one can ignore all the mass outside the radius r from the center:

\[\begin{equation} \oint_{S_r} \vec{g} \cdot d\vec{A} = -4\pi G \int_{B_r} \rho \, dV \end{equation}\]

where \(g_r\) is the radial component of the gravitational field at distance \(r\) from Earth’s center, \(\rho\) is Earth’s density, \(S_r\) is the sphere of radius \(r\) centered on Earth’s center of mass, and \(B_r\) is the volume enclosed by \(S_r\). Assuming that \(\rho\) only depends on the distance from the center of the Earth, evaluating the flux on the left and the volume integral on the right gives

\[\begin{equation} 4\pi r^2 g_r = -16\pi^2 G \int_0^r \rho(s) ~s^2 ds \end{equation}\]

\[\begin{equation} g_r = -\frac{4\pi G}{r^2} \int_0^r \rho(s)~s^2 ds \end{equation}\]

Setting \(M_r = 4\pi \int_0^r \rho(s)~s^2 ds\), the portion of Earth’s mass enclosed within \(S_r\), we can rewrite the last formula as

\[\begin{equation} g_r = -\frac{GM_r}{r^2} \end{equation}\]

Now, letting \(\rho_r\) denote the average density of the portion of the Earth enclosed within \(S_r\), we have

\[\begin{equation} g_r = -\frac{4 \pi G \rho_r r}{3} \end{equation}\]

The conclusion is that the gravity inside Earth depends roughly linearly on the distance from the center of the planet and density variations account for the deviations from linearity.

An interesting way to visualize this is to think of an elevator over 12,700 km long running from Hamilton, New Zealand to Cordoba, Spain. During the trip (which at an average speed of 200 km/h would take almost three days) passengers would feel a gradual, roughly linear decrease in weight, experience weightlessness at the middle of the journey, and then feel a gradual increase in weight as they neared the surface on the other side. Also, around the midpoint of the journey the floor and the ceiling would swap.

Answer 3 (score 6)

Acceleration due to gravity at depth d below the earth’s surface is given by:

\(g(d) = G M_e \dfrac{R_e - d}{R_e^3}\)

Where,

G = Universal gravitational constant
Me = Mass of the earth
Re = Radius of the earth
d = depth below the earth's surface
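The formula above can be sketched numerically; the physical constants are standard, and the uniform-density Earth is the simplifying assumption already made in this answer:

```python
# Uniform-density model from this answer: g(d) = G * M_e * (R_e - d) / R_e^3.
# Standard values for the constants; uniform density is the assumption.
G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_e = 5.972e24    # mass of the Earth, kg
R_e = 6.371e6     # radius of the Earth, m

def g(d):
    """Gravitational acceleration (m/s^2) at depth d below the surface."""
    return G * M_e * (R_e - d) / R_e**3

print(g(0))         # ~9.8 m/s^2 at the surface
print(g(R_e / 2))   # exactly half the surface value at half the radius
print(g(R_e))       # 0 at the center
```

The linear decrease with depth is immediate from the formula; the PREM curve in the accepted answer shows how real density variations bend this straight line.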


63: How to calculate gauge pressure at water depth? (score 122216 in 2014)

Question

I am creating a program that will calculate danger at water depths for sensitive equipment and maybe humans.

How do I calculate the gauge pressure at a depth \(D\)? Let’s say \(D = 5000\) meters.

My guess is that I need to calculate the atmospheric pressure so I can find the pressure in \(\mathrm{Pa}\).

Any help is appreciated.

Answer accepted (score 0)

Contrary to the accepted answer:

The gauge pressure, \(P_{gauge}\) at a depth \(H\) in a fluid of density \(D\), where the acceleration of gravity is \(g\), is given by:\[P_{gauge}=DgH\] since the gauge pressure is the difference between the absolute pressure and the current atmospheric pressure.

Absolute pressure is the pressure measured against a vacuum. So, starting in space, you descend to sea level through the atmosphere, and experience an increase to (naturally) one atmosphere of absolute pressure. As you go down through the water to any depth, the hydrostatic pressure gradually adds to the surface atmospheric pressure. So the absolute pressure at depth, \(P_{abs}\), is given by:\[P_{abs}=P_{atmos}+DgH\]
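A minimal sketch of these formulas for the asker’s 5000 m depth (note the answer writes density as \(D\) and depth as \(H\)); seawater density of 1025 kg/m³ is an assumption, since the question does not specify fresh or salt water:

```python
# Gauge and absolute pressure at depth, per the formulas above.
# ASSUMPTION: seawater density 1025 kg/m^3 (typical value; not in the question).
rho   = 1025.0     # water density, kg/m^3
g     = 9.81       # gravitational acceleration, m/s^2
P_atm = 101325.0   # standard atmospheric pressure, Pa

def gauge_pressure(depth_m):
    """P_gauge = rho * g * H, in pascals."""
    return rho * g * depth_m

def absolute_pressure(depth_m):
    """P_abs = P_atm + rho * g * H, in pascals."""
    return P_atm + gauge_pressure(depth_m)

print(gauge_pressure(5000) / 1e6)      # ~50.3 MPa gauge
print(absolute_pressure(5000) / 1e6)   # ~50.4 MPa absolute
```

At 5 km down the atmospheric contribution (~0.1 MPa) is a rounding error next to the ~50 MPa hydrostatic term, which is why gauge and absolute pressure are nearly identical at depth.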


64: Why is work a scalar and not a vector? (score 121231 in 2018)

Question

Work (in physics) is a scalar. Why is it not a vector?

Answer accepted (score 15)

It’s defined as a dot-product (or scalar product) of force and displacement, both of which are vectors.

A scalar product of two vectors gives a scalar result (aptly named!).

\[dW = \vec{F}\cdot\vec{S} = {\|F\|}{\|S\|}\cos\theta\] (\(\theta\) being the angle between the vectors).

No direction, only magnitude.

Thinking logically, what would the direction of work be, anyway? You may say, “In the direction of displacement!”, but then why not in the direction of force? And if you say both, well, they aren’t always the same! A force can do work on a body even when the displacement is at an angle \(\theta\) to the force.

Note that when \(\theta\) is \(90^\circ\), the result is zero (\(\cos 90^\circ = 0\)): when force and displacement are perpendicular, the force does no work on the body!


Edit: As said by @anna: please also note that work is part of the energy of a system (work and energy), and energy is a scalar. If it were not so, we would not be talking of “conservation of energy” as an experimental observation. Energy is a scalar.

Answer 2 (score 2)

Another way to see this is to test how it transforms under rotation of coordinate axes. Vectors and scalars have distinct transformation patterns. For simplicity if we assume a three dimensional Cartesian coordinate system then knowing that both force and displacement are vectors, i.e., their components transform under same rotation as: \[A_i \rightarrow A_i^{\prime}= \sum_{j=1}^{3}a_{ij}A_j\] where the \(a_{ij}\)’s are elements of an orthogonal matrix with determinant=+1, one can check that work done \(W \rightarrow W^{\prime}=W\), i.e., work done remains invariant under rotation of coordinate axes. In other words, work done due to displacement caused by a force is a scalar quantity.
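The invariance claim above can be checked numerically; a small sketch with illustrative vectors and a rotation about the z-axis:

```python
# Numerical check: W = F . d is unchanged when force and displacement
# are both rotated by the same orthogonal matrix (values are illustrative).
import numpy as np

theta = 0.7  # arbitrary rotation angle about the z-axis, radians
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0],
              [s,  c, 0],
              [0,  0, 1]])   # orthogonal, determinant +1

F = np.array([3.0, -1.0, 2.0])   # force, N (illustrative)
d = np.array([1.0,  2.0, 0.5])   # displacement, m (illustrative)

W  = F @ d               # work in the original frame
Wp = (R @ F) @ (R @ d)   # work in the rotated frame

print(W, Wp)  # identical up to floating-point rounding
```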

Answer 3 (score 0)

In addition to the scalar-product reason mentioned above, I would go a step back and explain it based on why we need to define vector quantities at all. Some quantities can be fully described by a magnitude and a sign, like temperature or distance, because a positive or negative value gives us full information. But a quantity like displacement needs a direction: a displacement of 20 m doesn’t specify which direction the 20 m is in, and that matters, since displacement is the shortest path between two points and cannot be in just any direction. For work, on the other hand: if we do 10 J of work pushing an object 10 m east, and then push it 10 m west with, say, 15 J, the total work done is 25 J. We are not bothered about direction here because it would not add any extra information. (Please note that if the force is opposite to the displacement, the work is negative.)

65: Different batteries connected in parallel (score 120442 in 2013)

Question

If we have 2 batteries one of emf x and the other is of emf y and we connect them in series we get an effective emf of x+y.

But what if we connect them in parallel, how to calculate the emf now?

Answer accepted (score 17)

In ideal circuit theory, the parallel connection of two ideal voltage sources with different voltages yields an inconsistent equation: e.g., a 3 V and a 2 V source connected in parallel give, by KVL, the equation 3 = 2.

In the real world, batteries are not ideal voltage sources; batteries can supply a limited current and the voltage across the battery does, in fact, depend on the supplied current. This is represented as a series internal resistance.

So, the circuit diagram for the two batteries in parallel must include the internal resistances which will give consistent results.

The bottom line is that one of the batteries will supply power to the other and it is possible that one or both batteries will be damaged and, possibly, violently so.

Answer 2 (score 14)

You should not connect different batteries in parallel.

If you do, the battery with the highest voltage will discharge into the other one, until they end up with equal voltages. If the second battery (the lower voltage one) is a rechargeable, then it will be charged by the first one, again until the two have the same voltage. In this case the end voltage will be intermediate between the two starting voltages.

The current flowing between the batteries during this process will be quite high: it is equal to the difference between the two voltages divided by the sum of the internal resistances of the batteries: \[I=(V_1-V_2)/(R_1+R_2)\]

This current may damage one or both of the batteries.

Answer 3 (score 3)

The other answers are good (especially the \(I = (V_1 - V_2) / (R_1 + R_2)\) equation, which we will use), but I just wanted to give you ballpark estimates of the numbers you can expect to see. Imagine connecting a 9V battery and a 1.2V AA battery in parallel; then:

V1 - V2 = 7.8V

For internal resistances, it’s hard to give exact numbers, but based on this excellent document from Energizer, at most you are going to see internal resistances of 1.0 ohm. At room temperature you are more likely to see resistances of around 0.1 ohm. Now, if we assume that the internal resistances are roughly the same for both batteries, then we can say that:

R1 = R2 (given)
I = (V1 - V2) / (2R1)

Which we will now use for both possible internal resistances:

I(R1 = 1.0) = 7.8 / 2 = 3.9A
I(R1 = 0.1) = 7.8 / 0.2 = 39A

So, as you can see, somewhere between 3.9 and 39 amps of current is going to flow very quickly. Using \(P = VI\) we can then see that:

P(R1 = 1.0) = 30.42W
P(R1 = 0.1) = 304.2W

That is a lot of power dissipated in a very small package, which is probably why it is not unexpected to see such violent results. You’re going to boil your batteries pretty quickly with all of that heat.

P.S. I’m just doing this by ballparking off the top of my head very quickly but the rough numbers make sense to me. I hope this helped give a better visual sense of what happens to the poor batteries when you do this.
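The arithmetic above can be sketched in a few lines, with the same ballpark numbers and the same equal-internal-resistance assumption:

```python
# Ballpark from the answer above: a 9 V battery in parallel with a 1.2 V
# AA cell, assuming equal internal resistances R1 = R2 (the answer's
# simplification; real values vary by chemistry and temperature).
def fault_current(v1, v2, r1, r2):
    """I = (V1 - V2) / (R1 + R2): the current one battery drives into the other."""
    return (v1 - v2) / (r1 + r2)

dv = 9.0 - 1.2   # 7.8 V difference between the two batteries

for r in (1.0, 0.1):   # pessimistic and room-temperature internal resistance
    i = fault_current(9.0, 1.2, r, r)
    p = dv * i         # P = V * I dissipated in the loop
    print(r, i, p)     # ~3.9 A / ~30 W and ~39 A / ~304 W
```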

66: Why does the comb attract the pieces of papers if they’re neutral? (score 118340 in )

Question

When we rub a comb on our hair and then bring it near small pieces of paper, the pieces are attracted to the comb. The pieces of paper were not electrified beforehand, so they must be neutral. Why does the comb attract the pieces of paper if they are neutral and carry no net charge?

Answer accepted (score 8)

This is because a neutral object can be polarised by an electric field. When you build up negative charge on the comb and expose the pieces of paper to the electric field created by that charge, you polarise them, so that the side of each piece closer to the comb becomes positive and the far side negative.

The positive side of each piece is closer to the comb, so the attraction outweighs the repulsion and the paper jumps to the comb. Like charges repel; opposite charges attract.

Answer 2 (score 0)

Basically, when electrification takes place, electrons are not created but transferred. In the case of a comb attracting tiny bits of paper after being rubbed on dry hair, electrons from the dry hair are transferred to the comb; the charged comb then induces a dipole in the bits of paper, and the paper is attracted.


67: How can water evaporate at room temperature? (score 118097 in 2014)

Question

The boiling point of water is 100 degrees Celsius: the temperature at which water in liquid form is converted into gaseous form. How, then, is it possible for water to evaporate at room temperature?

Answer accepted (score 52)

Think of temperature as average kinetic energy of the water molecules. While the average molecule doesn’t have enough energy to break the inter-molecular bonds, a non-average molecule does.

Water is a liquid because the dipole attraction between polar water molecules makes them stick together. At standard atmospheric pressure (acting somewhat like a vice), you need a comparatively high temperature of 100°C (translating to a high average energy distributed among the microscopic degrees of freedom, most relevantly the kinetic ones) for water molecules to break free in bulk, creating bubbles of water vapour within the liquid.

However, at the surface of the liquid, lone molecules may end up getting enough kinetic energy to break free, due to the random nature of molecular motion, at basically any temperature. On the flip side, water molecules in the atmosphere may also enter the liquid at the surface; the balance between the two is captured by the equilibrium vapour pressure.
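The “non-average molecule” point can be illustrated with the 3-D Maxwell-Boltzmann speed distribution. A sketch, where the 1000 m/s cutoff is purely an illustrative stand-in for “fast enough to escape the surface”:

```python
# Fraction of water molecules faster than a given speed, from the
# 3-D Maxwell-Boltzmann distribution: P(v > v0) = 1 - erf(x) + 2x e^(-x^2)/sqrt(pi),
# where x = v0 / v_p and v_p = sqrt(2 k T / m) is the most probable speed.
# The 1000 m/s threshold is illustrative, not a real escape criterion.
import math

k = 1.380649e-23   # Boltzmann constant, J/K
m = 2.99e-26       # mass of one water molecule, kg

def fraction_faster_than(v0, T):
    """Fraction of molecules with speed > v0 at temperature T."""
    x = v0 / math.sqrt(2 * k * T / m)
    return 1 - math.erf(x) + 2 * x * math.exp(-x**2) / math.sqrt(math.pi)

# The high-speed tail grows quickly with temperature:
for T in (280, 300, 320):
    print(T, fraction_faster_than(1000.0, T))
```

Even well below boiling, a few percent of molecules are in this fast tail at any instant, which is why surface evaporation never stops.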

Answer 2 (score 10)

Imagine spinning a roulette wheel, but instead of dropping in one ball, you drop in 100. They all rattle around at different speeds, like the molecules in water. You can cool them down by spinning the wheel slower, so they bounce about less; heat them up by spinning faster so they bounce more; you can freeze them by stopping the wheel and waiting till they’re all stationary; and you can boil them by spinning the wheel so fast that they all fly out of the top.

Now pick up all the balls and throw them back in with the wheel spinning at a moderate speed. If you watch for a while you’ll see that although the average speed of the balls is below the “boiling point” where they all fly out the top, every now and again one ball will ricochet off another with enough force to send it flying out of the wheel. If you watch for long enough eventually all the balls will be gone. Your balls just evaporated.

Answer 3 (score 6)

Temperature is a measure for how much kinetic energy the molecules in a substance have. If the temperature is high, they are moving pretty fast, if the temperature is low, they are moving a lot slower. If molecules are moving slow, they bundle up and you get a solid. Once you heat it up a bit, the substance starts to become liquid. When you heat it up even more, the molecules will start to move so fast they will spread out into the entire space (gas).

However, this is all averages. In a liquid all molecules are moving, some faster than others. If a molecule happens to break through the ‘surface’ of the water, it’ll have escaped the inter-molecular forces holding the water together and it’ll be evaporated. This can also happen with solids, there it is called sublimation.

If you’re heating up water, you’re adding energy so this process will start to go faster. Then at boiling point, you’ll reach the point where molecules will want to start moving so fast they start to form gas bubbles inside the liquid.

disclaimer: this is just what I remember from high school.

68: What does the wind speed have to be to blow away a person? (score 117232 in 2012)

Question

With the approaching hurricane, I am curious about what would happen if I go outside, in particular whether the wind gusts might be fast enough to blow me away. How fast would the wind have to be to blow away a person?

Answer accepted (score 15)

Let’s do math before we look for information. First, what is the force that keeps you anchored to the ground? This is the force of static friction, which is \(F_s = \mu m g\). What is this force opposing? The force of drag from the wind pushing on you. For the velocities involved (a high Reynolds number regime), the drag is quadratic in velocity, \(F_d = \frac{1}{2} \rho v^2 C_d A\), where \(\rho\) is the density of atmosphere, \(v\) is the velocity, \(C_d\) is a dimensionless drag coefficient, and \(A\) is your body’s cross-sectional area. So let’s set the forces equal and solve for the velocity:

\[v^2 = \frac{2\mu m g}{\rho C_d A}\]

We’ll be very ballpark about this. The density of air is \(\rho \approx 1.2 \text{ kg/m}^3\). I’ll say your mass is \(50 \text{ kg}\). Per this paper, we’ll say \(C_d A \approx 0.84 \text{ m}^2\). Per this thread, we’ll say \(\mu = 0.4\).

Putting all these numbers in gives us \(v \approx 20 \text{ m/s}\), or about 45 mph. But, this is just enough to make your body move (compared to standing still on the ground). It would take at least a 70 mph wind to overcome the force of gravity, and even then, that’s assuming the wind keeps pushing on you with your body turned to face it (or away from it), not sideways. Hard thing to guarantee given how the body is likely to tumble or spin.

It’s hard to be exact about this sort of thing, but let’s just say this: going out in this kind of storm is a bad idea. The numbers aren’t clear-cut enough to say you’re safe, so better safe than sorry.
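The force balance above, solved for \(v\), can be sketched with the same ballpark numbers the answer uses:

```python
# Wind speed at which drag overcomes static friction:
# v = sqrt(2 * mu * m * g / (rho * Cd * A)), per the force balance above.
# All numbers are the answer's own ballpark values.
import math

mu  = 0.4    # static friction coefficient (ballpark)
m   = 50.0   # body mass, kg
g   = 9.81   # gravitational acceleration, m/s^2
rho = 1.2    # air density, kg/m^3
CdA = 0.84   # drag coefficient times frontal area, m^2

v = math.sqrt(2 * mu * m * g / (rho * CdA))
print(v)           # ~20 m/s
print(v * 2.237)   # ~45 mph
```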

69: Why change in internal energy is zero in isothermal process (score 114900 in 2014)

Question

In isothermal process \(\Delta U =0\). But I am having trouble understanding it.

Say we have an ideal gas, and say the temperature is constant but I move the pressure and volume from \((P, V)\) to \((P-dP, V+dV)\). So the volume has expanded and the system has done some work on the surroundings. So my work is non-zero.

So how come \(\Delta U=0\)? I am really confused here.

Answer accepted (score 14)

It is not generally true that \(\Delta U = 0\) in an isothermal process.

An ideal gas by definition has no interactions between particles and no intermolecular forces, so a pressure change at constant temperature does not change the internal energy.

Real gases have intermolecular interactions, attractions between molecules at low pressure and repulsion at high pressure. Their internal energy changes with change in pressure, even if temperature is constant.

For an ideal gas, in an isothermal process, \(\Delta U = 0 = Q-W\), so \(Q=W\).
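As a worked example of \(Q = W\): for a reversible isothermal expansion of an ideal gas, \(W = nRT\ln(V_2/V_1)\). A sketch with illustrative numbers (one mole doubling its volume at 300 K):

```python
# Isothermal ideal-gas expansion: Delta U = 0, so Q = W = n R T ln(V2/V1).
# One mole doubling its volume at 300 K (illustrative numbers).
import math

R = 8.314    # gas constant, J/(mol K)
n = 1.0      # amount of gas, mol
T = 300.0    # temperature, K

W = n * R * T * math.log(2)   # work done by the gas for V2/V1 = 2
Q = W                         # heat absorbed equals work done (Delta U = 0)
print(W)                      # ~1.7 kJ
```

The gas does ~1.7 kJ of work, and exactly that much heat must flow in to keep the temperature (and hence the internal energy) constant.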

Answer 2 (score 8)

In Isothermal process the temperature is constant.

The internal energy of an ideal gas is a state function that depends only on temperature. Hence, at constant temperature, the internal energy change is zero.

For the process you are describing, the work is done by the system, but had you not supplied heat, the temperature would have dropped: that is adiabatic cooling. If no heat is supplied and the internal energy is not maintained at the same level, the process won’t be isothermal.

Answer 3 (score 3)

Internal energy is due to the motion of particles in a system, and it depends on temperature. Since the temperature in an isothermal process is constant, the internal energy is also constant, so the change in internal energy is zero.

70: Why do birds sitting on electric wires not get shocked? (score 114053 in 2018)

Question

When we touch electric wires, we get shocked. Why don’t birds sitting on electric wires get shocked?

Answer accepted (score 10)

You will not get a shock unless you complete the circuit to ground. This is why power lines can be worked on while live, from a helicopter:

Helicopter Power Line Maintenance

Answer 2 (score 4)

Because both of a bird’s feet stand on the same electric wire, they are at the same electric potential. People can be shocked because a person’s body is a conductor, and when we touch the wire while connected to the ground, there is a large potential difference across us. But when birds stand on a wire, both feet are always at the same electric potential, so no current flows through them and they won’t get shocked.

Answer 3 (score 3)

A simple googling would’ve provided you an answer. We won’t get a shock when we aren’t grounded, because current flows only in closed circuits (except perhaps in a plasma, as an open path). A bird sitting on the wire doesn’t form a complete circuit for the current to flow through. In other words, birds have both feet on the same wire (the bird’s body is also a much higher resistance path than the stretch of wire between its feet, but that’s another matter).

I’ve seen birds (like crows in our country) get shocked many times. Sometimes birds touch the phase and the neutral (or earth) wires, creating the largest possible potential difference, and they get toasted and fall down. Not all birds are clever.

Imagine: take a circuit with a battery. Connect one terminal of the battery to a resistance and a galvanometer. Leave the other terminal open. If the galvanometer shows a deflection, then you’d have solved your question and you’d definitely get a Nobel…

Have a look over this overview of the topic…

71: Do low frequency sounds really carry longer distances? (score 112259 in 2014)

Question

It is a common belief that low frequencies travel longer distances. Indeed, the bass is really what you hear when the neighbor plays his HiFi loud (Woom Woom). Try asking people around, a lot of them believe that low sounds carry longer distances.

But my experience isn’t as straightforward. In particular:

  • When I stand near someone who’s listening loud music in headphones, it is the high pitched sounds that I hear (tchts tchts), not the bass.
  • When I sit next to an unamplified turntable (the disc is spinning but the volume is turned off), I hear high pitched sounds (tchts tchts), not the bass.

So with very weak sounds, high frequencies seem to travel further?

This makes me think that perhaps low frequencies do not carry longer distances, but the very high amplitude of the bass in my neighbor’s speakers compensates for that. Perhaps also the low frequencies resonate with the walls of the building? Probably also the medium the sound travels through makes a difference? Or perhaps high frequencies are reflected more by walls than low frequencies?

I found this rather cute high school experiment online, which seems to conclude that low and high frequencies travel as far, but aren’t there laws that physicist wrote centuries ago about this?

Answer accepted (score 70)

Do low frequencies carry farther than high frequencies? Yes. The reason has to do with what’s stopping the sound. If it weren’t for attenuation (absorption) sound would follow an inverse square law.

Remember, sound is a pressure wave vibration of molecules. Whenever you give molecules a “push” you’re going to lose some energy to heat. Because of this, sound is lost to heating of the medium it is propagating through. The attenuation of sound waves is frequency dependent in most materials. See Wikipedia for the technical details and formulas of acoustic attenuation.

Here is a graph of the attenuation of sound at different frequencies (accounting for atmospheric pressure and humidity):

sound absorption at various frequencies

As you can see, low frequencies are not absorbed as well. This means low frequencies will travel farther. That graph comes from this extremely detailed article on outdoor sound propagation.
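To get a feel for how strongly this favors low frequencies over distance, here is a small sketch. The attenuation coefficients below are made-up illustrative values of roughly the right order of magnitude, not numbers from the linked article (the real values depend on temperature, pressure and humidity):

```python
# Illustrative sketch: how frequency-dependent attenuation changes amplitude
# over distance. The dB/km values here are assumed, order-of-magnitude only.
alpha = {100: 0.3, 1000: 5.0, 10000: 100.0}  # Hz -> attenuation in dB/km

distance_km = 2.0
for freq in sorted(alpha):
    loss_db = alpha[freq] * distance_km       # total attenuation over the path
    fraction = 10 ** (-loss_db / 20)          # remaining pressure amplitude
    print(f"{freq:>6} Hz: {loss_db:7.1f} dB lost, "
          f"amplitude down to {fraction:.2e} of original")
```

Even with these rough numbers, a couple of kilometers leaves the bass nearly untouched while the treble is attenuated to essentially nothing.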

Another effect that affects sound propagation, especially through walls, headphones, and other relative hard surfaces is reflection. Reflection is also frequency dependent. High frequencies are better reflected whereas low frequencies are able to pass through the barrier:

sound frequency transmission versus reflection

This, together with frequency-based attenuation, is why low-frequency sounds are much easier to hear through walls than high-frequency ones.


Frequency Loudness in Headphones: The above description applies to sounds that either travel long distances or are otherwise highly attenuated. Headphones start off at such low intensities, and their sound travels such short distances, that attenuation is not a dominant factor. Instead, the frequency response curve of the human ear plays a big role in perceived loudness.

The curves that show human hearing frequency response are called Fletcher–Munson curves:

Fletcher-Munson_curves

The red lines are the modern ISO 226:2003 data. All the sound along a curve is of “equal loudness” but as you can see, low frequencies must be much more intense to sound equally as loud as higher frequency sounds. Even if the low frequencies are reaching your ear, it’s harder for you to hear them.

Headphone sound is doubly compounded by the difficulty of making headphones with good low-frequency response. With loudspeakers you can split the job of producing frequencies among a subwoofer, a midrange speaker, and a tweeter. For low frequencies subwoofers are large and have a resonating chamber which simply isn’t an option with headphones that must produce a large range of sound frequencies in a small space. Even a good pair of headphones like Sennheiser HD-650 struggle with lower frequencies:

headphones frequency response

So if it sounds like high frequencies travel farther with headphones, it’s because headphones are poor at producing low frequencies and your ear is poor at picking them up.

Answer 2 (score 8)

In addition to the points made in Brandon Enright’s excellent answer, you need to consider that sound sources aren’t really idealised point sources. Depending on the application, you can shape the speaker(s) so you have cylindrical waves, or dipole waves, or sections of a monopole wave …, or a combination of several patterns.

Normal headphones are basically dipole speakers, and especially for bass frequencies (wavelength much larger than the speakers) this describes their behaviour well. So the amplitude decreases \(\propto 1/ r^4\). At higher frequencies, they also have some monopole components which decay more slowly, with the familiar inverse-square. So if you’re listening from far away, you’ll mostly hear those treble frequencies and little or no bass. OTOH, while wearing the headphones there’s little difference since you’re in the near field where neither frequency range has decayed substantially at all.

Answer 3 (score 2)

Another thing that happens that can lead you to think that low frequency sounds attenuate quicker is that if you record yourself one time being close to the microphone and another time being farther away, you’ll notice that the farther you are the more the lowest frequencies are picked up. This is due to the proximity effect and not to the low frequency sounds being attenuated.

http://en.wikipedia.org/wiki/Proximity_effect_(audio)

72: How does this “simple” electric train work? (score 111994 in 2016)

Question

In this YouTube video, a dry cell battery, a wound copper wire and a few magnets (see image below) are being used to create what can be described as “train”. It looks fascinating but how does this experiment work?

Update 8th August 2016: This problem has been completely worked out in a paper that was published in the January 2016 issue of American Journal of Physics.

Link: Am. J. Phys. 84, 21 (2016); http://dx.doi.org/10.1119/1.4933295

Still shot from magnet train video

Answer accepted (score 48)

That’s a nice video - good find :-)

If you run a current through a coil, it generates a magnetic field inside the coil like this:

Coil

(Image from the Hyperphysics site.)

If the field lines are exactly parallel a bar magnet will feel no net force. However at the ends of the coil, where the field lines diverge, a bar magnet will be either pulled into the coil or pushed out of the coil depending on which way round you insert it.

The trick in the video is that the magnets are made of a conducting material and they connect the battery terminals to the copper wire, so the battery, magnets and copper wire make a circuit that generates a magnetic field just in the vicinity of the battery. The geometry means the two magnets are automatically at the ends of the generated magnetic field, where the field is divergent, so a force is exerted on the magnets.

The magnets have been carefully aligned so the force on both magnets points in the same direction, and the result is that the magnets and battery move. But as they move, the magnetic field moves with them and you get a constant motion.

If you flipped round the two magnets at the ends of the battery the battery and magnets would move in the reverse direction. If you flipped only one magnet the two magnets would then be pulling/pushing in opposite directions and the battery wouldn’t move.

73: Why can I never see any stars in the night sky? (score 111875 in 2017)

Question

I have always lived near a large city. There is a stark contrast between the picture linked below for example, and what I see with the naked eye.

Sometimes I can see a few stars here and there, but usually they can be counted on my fingers. I’m looking up at the night sky right now, and there isn’t a single visible star!

Question: What creates this large gap in what I can see?

Here’s the photograph I was talking about: http://apod.nasa.gov/apod/ap120123.html

Answer accepted (score 21)

This is a simple and clear issue, with a unique answer. I see other replies mentioning weather conditions, dark adaptation and so on. That’s just so much hand waving, given that the first thing you said was “I’ve always lived in somewhat large cities”.

The core problem here, by a very wide margin, is light pollution if you live in a large city. This is the one factor, above everything else, that affects your ability to see the stars.

Here’s a light pollution map:

http://www.jshine.net/astronomy/dark_sky/

The white zones are the worst, and they are in the middle of the cities. Black zones are the best.

Here’s a somewhat better (but not perfect) comparison of a dark sky versus light polluted sky (your picture was taken with a very long exposure that doesn’t look very realistic):

enter image description here

The dome of light above the city is very visible if imaged from afar:

enter image description here

Long exposure pictures in cities will reveal the orange skyglow, which is the main reason why you can’t see the stars - it’s like noise masking off the faint light from the distant objects:

enter image description here

Light pollution affects primarily the observations of faint objects, such as nebulae or distant galaxies. Bright objects such as the Moon, the big planets, or some of the bright stars, are not affected by light pollution.

Using a telescope with a large aperture alleviates the effects of light pollution to some extent, but it cannot work miracles. A dark sky is always better.

Usually a 1 hour drive away from the city will bring you to a place with a dark sky, free of light pollution - but it depends on several factors. In such a place you should be able to see the Milky Way with your naked eye. The Andromeda galaxy is also visible with the naked eye if the sky is dark enough.

Answer 2 (score 5)

A number of factors:

  • Weather conditions will obviously affect your ability to see the stars. Try on a perfectly clear night.
  • Let your eyes become adjusted to the dark for at least 30 minutes for optimal seeing.
  • Due to the nature of the human visual system, the rod cells concentrated in the outer parts of the eye (peripheral vision) are more sensitive to light than cone cells. This means that you can see stars more easily by not looking directly at them.
  • The image you linked to was obviously taken by a camera which is able to take in more light over an extended period of time than our eyes. The camera can capture more light which means it can see more stars.
  • If you really want to see the stars, get out of the city and find a nice dark area, lay down, and observe.

Answer 3 (score 3)

I live in a 5 million people conurbation in Germany and even there I can see stars. From my balcony. Heck, the light pollution isn’t even bad enough to /not/ see the ring nebula in an 8" reflector. So unless you live right under a neon sign, you should be able to see some stars.

In typical street illumination and under a clear sky you will still see around 5 - 10 of the brightest stars and planets (e.g. Rigel, Betelgeuse, Venus, Jupiter). If you manage to find a “dark” spot - like a back alley - you might see much more. On my balcony (again: 5 million people conurbation) I can see many constellations - also the darker ones. You could use Google Sky Map or similar software to find some stars.

What you will never see in a city, though, is the Milky Way as in the picture you quoted, because it is not bright enough.

74: What would be the resistance of a wire, if it is stretched to double length? (score 111533 in 2015)

Question

If there is a wire of resistance \(R\) and we stretch it such that it becomes 2 times longer, then what should be the new resistance of the wire?

Answer accepted (score 2)

Resistance is directly proportional to the length of the wire, and inversely proportional to the cross sectional area of the wire.

\(R = \rho l/A\), where \(R\) is the resistance in ohms, \(\rho\) is the material’s resistivity in ohm-metres, \(l\) is the length in metres, and \(A\) is the cross sectional area in m^2.

As a wire gets longer its resistance increases, and as it gets thinner its resistance also increases because its cross sectional area decreases.

Doubling the length will double the resistance, but the wire also must get thinner as it is stretched, because it will contain the same amount of metal in twice the length. The volume of a cylinder is length * cross sectional area, but in order to find the new cross section, you need to consider what the wire is made of. Most materials resist a change in volume more than they resist a change in shape, and because of that, they lose less volume than otherwise would be expected when stress is applied.

In the case of the stretched wire, its density will be less after stretching than it was before. This means that the volume of space occupied by the metal in the wire expands, due to the cross sectional area not shrinking in proportion to the stretch of its length. The amount of reduction in cross section is determined according to Poisson’s ratio (http://silver.neep.wisc.edu/~lakes/PoissonIntro.html)

Assuming the wire is made of copper, Poisson’s ratio is about 0.355; this is the ratio of transverse strain (across the cross section) to axial strain (along the length) of the stretched copper wire. Without considering Poisson’s ratio, one would expect the area of the cross section to be halved, and the final volume to remain constant, but this would be wrong.

Another consideration is the effect on the wire’s conductivity of cold-working the metal (assuming it is stretched without heating it). When the metal is put under strain, inter-atomic distances will change, which changes the resistivity. This is called the piezoresistive effect, and it’s used in strain gauges. I saw one study of this effect in which the resistance in a thin copper wire doubled.

As the cross sectional area of the stretched wire is less than that of the original wire, then in addition to doubling the resistance by doubling the length, you further increase the resistance by reducing the cross section. The piezoresistive effect increases the resistance yet further. Thus, there are three ways resistance in the stretched wire increases: (1) doubling the length, (2) reducing the cross sectional area according to Poisson’s ratio, and (3) the piezoresistive effect.
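Under the simplest textbook assumption - constant volume, ignoring the Poisson-ratio and piezoresistive corrections discussed above - doubling the length halves the cross section, so the resistance quadruples. A quick sketch with example numbers:

```python
# Constant-volume approximation (ignores the Poisson-ratio and
# piezoresistive corrections discussed above): doubling the length
# halves the cross section, so resistance rises by a factor of 4.
rho = 1.68e-8      # resistivity of copper, ohm*m
l, A = 1.0, 1e-6   # original length (m) and cross section (m^2), example values

R_original = rho * l / A
l_new = 2 * l
A_new = (l * A) / l_new       # volume l*A conserved => A_new = A/2
R_new = rho * l_new / A_new

print(R_new / R_original)     # -> 4.0
```

So the usual "exam answer" is \(4R\); the corrections above explain why a real stretched wire deviates from that.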

75: Why is current a scalar quantity? (score 111081 in 2013)

Question

Current has both magnitude and direction. As per the definition of vector defined in encyclopedia, current should be a vector quantity. But, we know that current is a scalar quantity. What is the reason behind it?

Answer accepted (score 21)

To be precise, current is not a vector quantity. Although current has a specific direction and magnitude, it does not obey the law of vector addition. Let me show you.

enter image description here

Take a look at the above picture. According to Kirchhoff’s current law, the sum of the currents entering the junction should be equal to sum of the currents leaving the junction (no charge accumulation and discharges). So, a current of 10 A leaves the junction.

Now take a look at the picture below.

enter image description here

Here, I have considered current to be a vector quantity. The resultant current is less than that obtained in the previous situation. This result has a few implications, and I would like to go through some of them. Such a discrepancy could arise from charge accumulation at some parts of the conductor, or from charge leakage. In everyday circuits we use materials that are approximately ideal, so these phenomena can be neglected - and yet the two situations give clearly different answers, a difference we cannot neglect.
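The contrast between the two pictures can be put in numbers. The branch currents and the angle between the wires below are hypothetical values chosen for illustration:

```python
import math

# Two branch currents meeting at a junction (hypothetical example values).
i1, i2 = 6.0, 4.0          # amperes
theta = math.radians(90)   # assumed angle between the two incoming branches

# Kirchhoff's current law: currents add as plain numbers, geometry irrelevant.
i_kcl = i1 + i2            # 10.0 A leaves the junction

# If currents added like vectors, the result would depend on the angle:
i_vector = math.sqrt(i1**2 + i2**2 + 2 * i1 * i2 * math.cos(theta))  # ~7.21 A

print(i_kcl, i_vector)
```

The "vector" answer changes as you bend the wires, while the measured outgoing current does not - which is the point of the argument above.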

If you are not convinced, let me tell you more. In the above description (current as a vector), I have talked about the difference in magnitudes alone. The direction of the resultant current (as shown) is subtle. That’s because in practical reality, we do not observe the current flowing along the direction shown above. You may argue that in the presence of the conductor, the electrons are restricted to move along the inside and hence it follows the available path. You may also argue that the electric field inside the conductor will impose a few restrictions. I appreciate the try but what if I remove the conductors? And I also incorporate particle accelerators that say shoot out proton beams thereby, neglecting the presence of an electric field in space.

Let me now consider two proton beams (currents), each carrying a current of 5 A as shown below. These beams are isolated and we shall not include any external influences.

enter image description here

Now that there is no restriction to the flow of protons, the protons meeting at the junction will exchange momentum and this will result in scattering (protons represented by small circles). You would have a situation where two beams give rise to several beams as shown below. Our vector addition law does not say this.

enter image description here

I have represented a few in the picture above. In reality, one will observe chaotic motion. Representing the beams (as shown right above) becomes a very difficult task because the protons do not follow a fixed path. I have just shown you an unlikely but possible situation.

All this clearly tells us that current is not a vector quantity.

Another point I would like to mention is, current cannot be resolved into components unlike other vector quantities. Current flowing in a particular direction will always have an effect along the direction of flow alone over an infinite period of time (excluding external influences such as electric or magnetic fields).

Answer 2 (score 17)

I think there might be a contextual issue.

If you’re passing DC current in a circuit, it makes sense to treat it as a scalar, because it flows along the wires and you’re usually designing the circuit so that its components don’t appreciably interact in ways where the wire geometry matters… or rather treating ones that do as separate sub-units, e.g., inductors.

In other words, if your current is constrained to go in one dimension, such as along wires, then it makes sense for it to be treated as a scalar, because a \(1\)-dimensional vector is a scalar.

But as the case of inductors shows, the direction in space the current is flowing can make a lot of difference electromagnetically. As BMS suggested, more fundamentally the conservation of charge is expressed by a continuity equation \[\frac{\partial\rho}{\partial t} + \nabla\cdot\mathbf{J} = 0\text{,}\] where \(\rho\) and \(\mathbf{J}\) are charge and current densities, respectively, quantities that also appear in Maxwell’s equations.
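The charge conservation that the continuity equation expresses can be checked numerically. Here is a minimal 1-D sketch on a periodic grid; all the grid parameters and the toy current density are arbitrary example values:

```python
import numpy as np

# Minimal 1-D sketch: stepping the continuity equation
#   d(rho)/dt + dJ/dx = 0
# on a periodic grid conserves the total charge (example parameters).
n, dx, dt, c = 200, 0.1, 0.001, 0.5   # grid size, spacings, advection speed
x = np.arange(n) * dx
rho = np.exp(-(x - 10.0) ** 2)        # initial charge-density blob
q0 = rho.sum() * dx                   # total charge at t = 0

for _ in range(1000):
    J = c * rho                                          # toy current density
    dJdx = (np.roll(J, -1) - np.roll(J, 1)) / (2 * dx)   # periodic central difference
    rho = rho - dt * dJdx                                # forward-Euler update

print(abs(rho.sum() * dx - q0))       # ~0: total charge is conserved
```

On the ring, every value of \(J\) enters the summed divergence once with each sign, so the total charge is conserved to machine precision no matter how the density rearranges itself.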


Here’s some context that may make the original post clear. The force on a current-carrying wire … \(\vec{F}=I\vec{L}\times\vec{B},\) … As another example, the Biot-Savart law is \(d\vec{B}=\frac{\mu_o}{4\pi} \frac{Id\vec{l}\times\hat{r}}{r^2},\) where \(d\vec{l}\) is in the same direction as the “current” \(I\).

One can obviously rewrite those as \(L\vec{I}\times\vec{B}\) and \(dl(\vec{I}\times\hat{r})\) if one wished to think of a vector current. The only reason not to is that your ammeter tells you directionality along a wire (hence \(\pm\)) rather than directionality in space, so it’s simply more convenient the other way.

Answer 3 (score 8)

Current doesn’t follow the laws of vector addition and decomposition, and so it is not a vector quantity. Current density is a vector quantity.

76: Why are mirror images flipped horizontally but not vertically? (score 111077 in 2018)

Question

Why is it that when you look in the mirror left and right directions appear flipped, but not the up and down?

Answer accepted (score 256)

Here’s a video of physicist Richard Feynman discussing this question.

Imagine a blue dot and a red dot. They are in front of you, and the blue dot is on the right. Behind them is a mirror, and you can see their image in the mirror. The image of the blue dot is still on the right in the mirror.

What’s different is that in the mirror, there’s also a reflection of you. From that reflection’s point of view, the blue dot is on the left.

What the mirror really does is flip the order of things in the direction perpendicular to its surface. Going on a line from behind you to in front of you, the order in real space is

  1. Your back
  2. Your front
  3. Dots
  4. Mirror

The order in the image space is

  1. Mirror
  2. Dots
  3. Your front
  4. Your back

Although left and right are not reversed, the blue dot, which in reality is lined up with your right eye, is lined up with your left eye in the image.

The key is that you are roughly left/right symmetric. The eye the blue dot is lined up with is still your right eye, even in the image. Imagine instead that Two-Face was looking in the mirror. (This is a fictional character whose face looks different on its left and right sides. His image on Wikipedia looks like this:)

enter image description here

If Two-Face looked in the mirror, he would instantly see that it was not himself looking back! If he had an identical twin and looked right at the identical twin, the “normal” sides of their faces would be opposite each other. Two-Face’s good side is his right. When he looked at his twin, the twin’s good side would be to the original Two-Face’s left.

Instead, the mirror Two-Face’s good side is also to the right. Here is an illustration:

enter image description here

enter image description here

So Two-Face would not be confused by the dots. If the blue dot is lined up with Two-Face’s good side, it is still lined up with his good side in the mirror. Here it is with the dots:

enter image description here

Two-Face would recognize that left and right haven’t been flipped so much as forward and backward, creating a different version of himself that cannot be rotated around to fit on top of the original.

Answer 2 (score 74)

Because they don’t flip left with right (or up with down): they flip the 3D space you’re standing in “inside out”, so what is far from the mirror appears far away inside the mirror and vice versa. A hand 1 meter from the mirror seems to be 1 meter on the other side of the mirror, but in the same spot with regard to left/right, so nothing is flipped.

Wiggle your left hand - you’ll see the hand which is to the left in the mirror wiggle. Wiggle your toes and the toes in the mirror image wiggle etc.

Answer 3 (score 60)

This common confusion stems from our familiarity with photographs. We forget that we rotate them to face ourselves.

Take a picture of yourself and hold it up in front of you. Probably you are holding it so that you can see your image. If so, you “flipped” the image of yourself when you rotated it 180 degrees around the vertical axis. When you look to the left side of the photo, you are looking over the right shoulder of your image. These directions are flipped!

Now look in a mirror. When you look to the left, you are looking over the left shoulder of your image. These directions are not flipped!

Now pick up the picture again and turn it so it’s facing the same direction you are facing. You have removed the 180 degree rotation so that you and your image are “looking” in the same direction. The left side of your image is again to your left. If the picture is transparent enough that you can see your image, you’ll see not the back of your head, but your eyes, giving you the impression that you’re looking back at yourself. A mirror image! But again, left and right are not flipped.

When you say the mirror “flips” left and right, you are speaking from the frame of reference of one who is used to the 180 degree rotation that you apply to view an opaque photograph. But that’s what we all do because we consider photographs, rotated 180 degrees to face ourselves, as being the “correct” left-right orientation.

What a mirror really flips is the depth dimension. That which is behind you appears to be in front of you.

77: Power dissipated by resistors in series versus in parallel for fixed voltage (score 110624 in 2018)

Question

Do two resistors in parallel dissipate more power for a fixed applied voltage compared to the same two resistors in series?

Answer accepted (score 11)

The most straightforward way to reason about this doesn’t require much math.

The power delivered by the voltage source to either pair of resistors is inversely proportional to their combined resistance, i.e., if the combined resistance is greater, the power delivered is smaller.

\[p_R = \dfrac{V^2}{R}\]

Now, recall that:

  • the series combination of two resistances is always greater than either individual resistance

  • the parallel combination of two resistances is always less than the resistance of either individual resistance


For example, suppose that both resistors have the same value of resistance \(R\).

Now, if the two resistors are connected in series, the equivalent resistance is \(R_{EQ}=2R\).

But, if the two resistors are connected in parallel, the equivalent resistance is \(R_{EQ}=\dfrac{R}{2}\).

Thus, the power for the series combination is:

\[p_{series} = \dfrac{1}{2}\dfrac{V^2}{R} \]

Whilst the power for the parallel combination is:

\[p_{parallel} = 2\dfrac{V^2}{R}\]

In this case, the parallel combination dissipates 4 times the power of the series combination.
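A quick numerical check of these expressions, with example values for \(V\) and \(R\); the last lines also check the general two-resistor ratio \(P_{series}/P_{parallel} = R_1R_2/(R_1+R_2)^2\):

```python
# Check of the fixed-voltage power comparison (example values).
V, R = 10.0, 5.0

p_series = V**2 / (R + R)               # two equal resistors in series
p_parallel = V**2 / (R * R / (R + R))   # two equal resistors in parallel

print(p_series, p_parallel, p_parallel / p_series)   # -> 10.0 40.0 4.0

# General case: P_series / P_parallel = R1*R2 / (R1+R2)^2, always below 1,
# so the series combination always dissipates less for a fixed voltage.
R1, R2 = 3.0, 7.0
ratio = (R1 * R2) / (R1 + R2) ** 2
print(ratio)                            # -> 0.21
```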

Answer 2 (score 2)

Power depends on voltage across the circuit and resistance of the circuit

\[\begin{equation} P = \frac{V^2}{R};\\ P_{series} = \frac{V^2}{(R_1+R_2)};\\ P_{parallel} = \frac{V^2}{(R_1^{-1}+R_2^{-1})^{-1}}=\frac{V^2}{\frac{R_1R_2}{R_1+R_2}}=\frac{V^2(R_1+R_2)}{R_1R_2};\\ \frac{P_{series}}{P_{parallel}} = \frac{R_1R_2}{(R_1+R_2)^2} \end{equation}\]

Since \(R_1\) and \(R_2\) are always positive, \(R_1R_2 < (R_1+R_2)^2\), i.e. \(P_{series} < P_{parallel}\)

Answer 3 (score 1)

For the same voltage supply, the power consumed by two resistances in series is less than the power consumed by the same resistances in parallel. Therefore we can say that \(P_{series} < P_{parallel}\).

78: Block on a block problem, with friction (score 110516 in 2013)

Question

Consider two blocks, one on top of the other on a frictionless table, with masses \(m_1\) and \(m_2\) respectively. There is appreciable friction between the blocks, with coefficients \(\mu_s\) and \(\mu_k\) for static and kinetic respectively. I’m considering the fairly routine problem of determining the maximum horizontal force \(F\) (say, to the right) that can be applied to the top block so that the two blocks accelerate together.

The problem is not hard to solve symbolically. If the two blocks move together, their accelerations are the same, and the top block doesn’t move with respect to the bottom block, so only static friction is in play. In a standard coordinate system (with \(x\) oriented to the right), the sum of horizontal forces for the top block is

\[F-F_{sf}=m_1a\]

and for the bottom block

\[F_{sf}=m_2a\]

where \(F_{sf}\) is the force of static friction. Solving for \(a\) in these two expressions, and then equating them, gives

\[F=\frac{(m_1+m_2)F_{sf}}{m_2}\]

The maximum such force will therefore be achieved when \(F_{sf}\) is maxed out at \(\mu_s m_1g\), so

\[F_{max}=\frac{m_1}{m_2}\mu_s(m_1+m_2)g\]

I understand this solution, but conceptually I don’t have a response to the following nagging question: \(F_{max}\) is clearly larger than the max static friction force \(\mu_sm_1g\) (because \(\frac{m_1+m_2}{m_2}>1\)), so why doesn’t the application of a force of magnitude \(F_{max}\) to the top block cause kinetic friction to take over? This line of reasoning would suggest that applying a force \(F\) of magnitude greater than \(\mu_sm_1g\) would cause the top block to start moving with respect to the bottom block (in which case the blocks no longer accelerate together, as in the above solution). I’m at a loss, conceptually, to say what’s wrong here. I suspect it has something to do with being careful about reference frames, but a clear explanation would be much appreciated.

Answer accepted (score 6)

The key is that the bottom block is actually moving and is not held fixed like the ground typically is (here I am assuming \(F\) is applied to the top block).

Elaborating on my comment: Your acceleration “\(a\)” is with respect to the ground. The equation \(F-F_{sf}=m_1a\) shows that the reason you can accelerate is because \(F_{max}>F_{sf}\), and this is accelerating with respect to the ground, not with respect to the bottom block.

In particular, if you held the bottom block in place (treating it as the ground), then yes, kinetic friction would kick in, but now your equations would be different (because \(a_{bottom}=0\) and \(a_{top}>0\)).

Answer 2 (score 4)

You can also think of your problem in a slightly different way, which may (or may not) help you see things in a different light.

  1. You apply your force \(F\), and it accelerates the top block at a rate \(a_1\).
  2. You can consider the acceleration as an inertial force of \(-m_1\cdot a_1\), so there is a surplus force \(F-m_1\cdot a_1\) left over.
  3. This surplus force will be used to accelerate the second block at a rate \(a_2\).
  4. Since the table is frictionless, there can be no surplus, and all the force must be consumed in accelerating the block, so \(F-m_1\cdot a_1-m_2\cdot a_2 = 0\).

So far everything holds no matter what the nature of the interfacial contact between the blocks, which we must introduce to solve for \(a_1\) and \(a_2\):

  1. If both blocks move together, then \(a_1 = a_2 = a\), and the inequality for static friction applies, \(F - m_1\cdot a \leq m_1 g \mu_s\). This, together with the equation of (4) above gives you an acceleration of \(a=F/(m_1+m_2)\) for forces \(F \leq \frac{m_1}{m_2} (m_1+m_2) g \mu_s\).
  2. If \(a_1 \neq a_2\) then we have relative motion, and then we know what the interfacial force is exactly, and we can calculate \(a_1 = F / m_1 - g \mu_k\) and \(a_2 = g \mu_k m_1 /m_2\).

But, to answer your question, why doesn’t a force larger than the maximum static friction applied to the top block cause slipping, and thus dynamic friction? Because part of the force is used up in accelerating the top block, and it is only the surplus that is available to try to overcome friction between the blocks.
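A numerical sketch may make this concrete. With assumed masses and friction coefficient, the friction force actually needed at \(F = F_{max}\) comes out exactly equal to the maximum static friction, even though \(F_{max}\) itself is larger:

```python
# Sketch of the no-slipping condition, with assumed example values.
g = 9.8
m1, m2 = 2.0, 3.0    # top and bottom block masses (kg)
mu_s = 0.4

F_max = (m1 / m2) * mu_s * (m1 + m2) * g   # max force with no slipping
f_s_max = mu_s * m1 * g                     # max static friction available

# At F = F_max the blocks share acceleration a = F/(m1+m2); friction is
# the only horizontal force on the bottom block, so the friction needed is:
a = F_max / (m1 + m2)
f_needed = m2 * a

print(F_max, f_s_max, f_needed)   # F_max exceeds f_s_max, yet f_needed == f_s_max
```

Part of \(F_{max}\) goes into accelerating the top block itself, so friction is never asked to supply more than its maximum - exactly the "surplus force" argument above.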

Answer 3 (score 1)

I would approach the problem a bit differently. Firstly, there is no relative motion between the two blocks, so the force at play is static friction only, i.e. \(F_{sf}\). This force accelerates the bottom mass \(m_2\):

\[F_{sf}=m_2a \qquad \text{(eq. 1)}\]

Now, consider the top mass \(m_1\) to be in an accelerated frame of reference, which has an acceleration \(a\). The forces acting on \(m_1\) are the following:

  1. \(F_{max}\), i.e. the pull force to the right (say);
  2. \(F_{sf}\), acting to the left;
  3. a pseudo force equal to \(m_1a\), acting to the left, due to the mass \(m_1\) being in an accelerated frame.

Since the body is at rest in this reference frame, these forces neutralize each other. Thus,

\[F_{max} = F_{sf} + m_1a \qquad \text{(eq. 2)}\]

Equations 1 and 2 are the equations that you used to solve your problem.

79: What is the physical significance of dot & cross product of vectors? Why is division not defined for vectors? (score 110391 in 2019)

Question

I get the physical significance of vector addition & subtraction. But I don’t understand what do dot & cross products mean?

More specifically,

  • Why is it that dot product of vectors \(\vec{A}\) and \(\vec{B}\) is defined as \(AB\cos\theta\)?
  • Why is it that cross product of vectors \(\vec{A}\) and \(\vec{B}\) is defined as \(AB\sin\theta\), times a unit vector determined from the right-hand rule?

To me, both these formulae seem to be arbitrarily defined (although, I know that it definitely wouldn’t be the case).

If the cross product could be defined arbitrarily, why can’t we define division of vectors? What’s wrong with that? Why can’t vectors be divided?

Answer accepted (score 51)

I get the physical significance of vector addition & subtraction. But I don’t understand what do dot & cross products mean?

Perhaps you would find the geometric interpretations of the dot and cross products more intuitive:

The dot product of A and B is the length of the projection of A onto B multiplied by the length of B (or the other way around–it’s commutative).

The magnitude of the cross product is the area of the parallelogram with two sides A and B. The orientation of the cross product is orthogonal to the plane containing this parallelogram.
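
Both geometric interpretations can be checked numerically. Here is a plain-Python sketch with arbitrary example vectors (not from the answer):

```python
import math

A = (3.0, 0.0, 0.0)
B = (1.0, 2.0, 0.0)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def norm(u):
    return math.sqrt(dot(u, u))

# Dot product = (length of A's projection onto B) times |B|
proj_len = dot(A, B) / norm(B)
print(abs(proj_len * norm(B) - dot(A, B)) < 1e-12)   # True

# |cross product| = area of the parallelogram spanned by A and B
print(norm(cross(A, B)))   # 6.0: base 3 times height 2
```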

Why can’t vectors be divided?

How would you define the inverse of a vector such that \(\mathbf{v} \times \mathbf{v}^{-1} = \mathbf{1}\)? What would be the “identity vector” \(\mathbf{1}\)?

In fact, the answer is sometimes you can. In particular, in two dimensions, you can make a correspondence between vectors and complex numbers, where the real and imaginary parts of the complex number give the (x,y) coordinates of the vector. Division is well-defined for the complex numbers.
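
The 2D correspondence can be made concrete with Python’s built-in complex numbers (the example vectors are arbitrary):

```python
# Treat 2D vectors as complex numbers: (x, y) <-> x + 1j*y.
a = complex(3, 4)   # the vector (3, 4)
b = complex(1, 2)   # the vector (1, 2)

q = a / b           # "vector division" is well-defined here
print(q * b)        # recovers (3+4j): division inverts multiplication
```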

The cross-product only exists in 3D.

Division is defined in some higher-dimensional spaces too (such as the quaternions), but only if you give up commutativity and/or associativity.


Here’s an illustration of the geometric meanings of dot and cross product, from the wikipedia article for dot product and wikipedia article for cross product:

[figures: geometric interpretations of the dot product and the cross product, from Wikipedia]

Answer 2 (score 42)

The best way is to ignore the garbage authors put in elementary physics books, and define it with tensors. A tensor is an object which transforms as a product of vectors under rotations. Equivalently, it can be defined by linear functions of (sets of vectors) and (linear functions of sets of vectors); all this is described on Wikipedia.

There are exactly two tensors which are invariant under rotations:

\(\delta_{ij}\) and \(\epsilon_{ijk}\)

All other tensors which are invariant under rotations are products and tensor traces of these. These tensors define the “dot product” and “cross product”, neither of which is a good notion of product:

\(V \cdot U = V^i U^j \delta_{ij}\)

and cross product

\((V \times U)_k = V^i U^j \epsilon_{ijk}\)

It is pointless to try to think of the cross product as a “product”, because it is not associative, \((A\times B)\times C\) does not equal \(A\times(B\times C)\). It is also less than useful to think of the dot-product as a product in the usual sense, because it takes pairs of vectors to numbers, and \((A\cdot B)C\) does not equal \(A(B\cdot C)\), because the first points in the C direction, and the second points in the A direction.

The best way is to get used to the invariant tensors. These generalize to arbitrary dimensions, they are much clearer, and they do not require a right-hand rule (this is taken care of by the index order convention). You will hardly find a physics paper which uses the cross product, with the notable exception of Feynman’s 1981 paper “The qualitative behavior of Yang-Mills theory in 2+1 dimensions”, and even when you do, it is trivial to translate.
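
The index formulas above can be sketched in plain Python. The Levi-Civita implementation is my own, not from the answer:

```python
def levi_civita(i, j, k):
    """eps_ijk: sign of the permutation (i, j, k) of (0, 1, 2); 0 on repeats."""
    if (i, j, k) in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):
        return 1
    if (i, j, k) in ((0, 2, 1), (2, 1, 0), (1, 0, 2)):
        return -1
    return 0

def dot(V, U):
    # V . U = V^i U^j delta_ij
    return sum(V[i] * U[i] for i in range(3))

def cross(V, U):
    # (V x U)_k = V^i U^j eps_ijk
    return [sum(levi_civita(i, j, k) * V[i] * U[j]
                for i in range(3) for j in range(3))
            for k in range(3)]

print(cross([1, 0, 0], [0, 1, 0]))  # [0, 0, 1]
```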

Answer 3 (score 16)

You can divide vectors with clifford (“geometric”) algebra.

The geometric product of vectors is associative:

\[abc = (ab)c = a(bc)\]

And the geometric product of a vector with itself is a scalar.

\[aa = |a|^2\]

These are all the properties required to define a unique product of vectors. All other properties can be derived. I’ll sum them up, however: for two vectors, the geometric product marries the dot and cross products.

\[ab = a \cdot b + a \wedge b\]

We use wedges instead of crosses because this second term is not a vector. We call it a bivector, and it represents an oriented plane. It can be instructive to introduce a basis to see this. \(e_1 e_1 = e_2 e_2 = 1\) and \(e_1 e_2 = -e_2 e_1\) capture the geometric product’s properties for these orthonormal basis vectors. The geometric product is then,

\[ab = (a^1 e_1 + a^2 e_2) (b^1 e_1 + b^2 e_2) = (a^1 b^1 + a^2 b^2) + (a^1 b^2 - a^2 b^1) e_1 e_2\]

As I said, the geometric product of two vectors is invertible in Euclidean space. This is obvious from the associativity property: \(a b b^{-1} = a(b b^{-1}) = a\). That \(b b^{-1} = 1\) implies that

\[b^{-1} = b/|b|^2\]

It’s informative to look at the quantity \(a = (a b) b^{-1}\), using the grouping to decompose it a different way.

\[a = (ab)b^{-1} = (a \cdot b) b^{-1} + (a \wedge b) \cdot b^{-1}\]

The first term is in the direction of \(b\), the second is orthogonal to \(b\). This decomposes \(a\) into \(a_\parallel\) and \(a_\perp\).

What others have said is right, you can’t define just the vector cross product to be invertible. This decomposition should convince you–you cannot fully reconstruct a vector without information from both the dot and cross products. And as has been said, this product is not commutative.
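
Here is a minimal sketch of the 2D geometric product, representing a multivector by its (scalar, e1, e2, e1e2) components. The representation and helper names are my own, not from the answer:

```python
def gp(A, B):
    """Geometric product of 2D multivectors (s, x, y, b) = s + x e1 + y e2 + b e1e2,
    using e1 e1 = e2 e2 = 1 and e1 e2 = -e2 e1."""
    s1, x1, y1, b1 = A
    s2, x2, y2, b2 = B
    return (s1*s2 + x1*x2 + y1*y2 - b1*b2,
            s1*x2 + x1*s2 - y1*b2 + b1*y2,
            s1*y2 + y1*s2 + x1*b2 - b1*x2,
            s1*b2 + b1*s2 + x1*y2 - y1*x2)

def vec(x, y):
    return (0.0, x, y, 0.0)

a, b = vec(2.0, 1.0), vec(1.0, 3.0)

bb = gp(b, b)                         # (|b|^2, 0, 0, 0): a vector squares to a scalar
b_inv = tuple(c / bb[0] for c in b)   # b^{-1} = b / |b|^2
ab = gp(a, b)                         # scalar part a.b, bivector part a^b
print(ab)                             # (5.0, 0.0, 0.0, 5.0)
print(gp(ab, b_inv))                  # recovers a = (0.0, 2.0, 1.0, 0.0)
```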

80: What is the difference between center of mass and center of gravity? (score 109593 in 2013)

Question

What is the difference between center of mass and center of gravity?

These terms seem to be used interchangeably.

Is there a difference between them for non-moving object on Earth, or moving objects for that matter?

Answer accepted (score 18)

The difference is that the centre of mass is the weighted average of location with respect to mass, whereas the centre of gravity is the weighted average of location with respect to mass times local \(g\). If \(g\) cannot be assumed constant over the whole of the body (perhaps because the body is very tall), they might (and generally will) have different values.

I don’t see an immediate connection with movement though.
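
The two weighted averages can be sketched for a very tall object, assuming an inverse-square falloff of \(g\) (the numbers below are illustrative assumptions):

```python
# Two equal point masses, one at the surface and one 100 km up (assumed).
R = 6.371e6                 # m, Earth's radius
masses = [1.0, 1.0]         # kg
heights = [0.0, 1.0e5]      # m above the surface

def g(h):
    return 9.81 * (R / (R + h))**2   # inverse-square falloff

com = sum(m * h for m, h in zip(masses, heights)) / sum(masses)
cog = (sum(m * g(h) * h for m, h in zip(masses, heights))
       / sum(m * g(h) for m, h in zip(masses, heights)))
print(com, cog)   # 50 km vs ~49.2 km: the COG sits below the COM
```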

Answer 2 (score 1)

The centre of mass and centre of gravity coincide as long as the gravitational field is uniform. The moment uniformity of the gravitational field is lost, we consider the centre of mass rather than the centre of gravity. In a uniform field, the two are interchangeable.

Answer 3 (score -2)

Center of mass is the mass-density-weighted average of position. Meanwhile, center of gravity is the average of position weighted by the force of gravity!

81: Why doesn’t the frequency of light change during refraction? (score 108644 in 2016)

Question

When light passes from one medium to another its velocity and wavelength change. Why doesn’t frequency change in this phenomenon?

Answer accepted (score 60)

The electric and magnetic fields have to remain continuous at the refractive index boundary. If the frequency changed, the light at each side of the boundary would be continuously changing its relative phase and there would be no way to match the fields.

Answer 2 (score 23)

Think of it like this: At the boundary/interface of the medium, the number of waves you send is the number of waves you receive, at the other side, almost instantly. Frequency doesn’t change because it depends on travelling of waves across the interface.

But speed and wavelength change, as the material on the other side may be different: the waves may now be longer or shorter, so the number of waves per unit length changes.

[figure: wavefronts crossing the boundary between two media]
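
Numerically, with the frequency fixed by the source, the wavelength simply scales with the speed. A sketch with assumed example values (air to glass):

```python
# Frequency is set by the source and unchanged at the boundary;
# speed and wavelength change with the medium. Assumed example values.
c = 3.0e8            # m/s, vacuum speed of light (rounded)
f = 5.0e14           # Hz, visible light (assumed)
n_air, n_glass = 1.0, 1.5

lam_air = (c / n_air) / f
lam_glass = (c / n_glass) / f
print(lam_air, lam_glass)   # 600 nm in air shortens to 400 nm in glass
```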

Answer 3 (score 18)

Here is the bookwork answer.

Consider a boundary between two media to be the plane \(y=0\). Draw a rectangular loop of side \(\delta x\) and \(\delta y\). Have an E-field either side of the boundary that is parallel to the boundary in the \(x\) direction. The E-field is \(E_1\) in medium 1 and \(E_2\) in medium 2.

Now use the integral form of Faraday’s law. \[ \oint {\bf E} \cdot d{\bf l} = - \int \frac{\partial {\bf B}}{\partial t} \cdot d{\bf S}\] \[ E_1 \delta x - E_2 \delta x = -\frac{\partial {\bf B}}{\partial t} \delta x \delta y.\] But now you can let \(\delta y\) shrink to zero and you find that \(E_2 = E_1\). i.e. the component of E-field that is parallel to the interface must be the same immediately either side of the boundary.

Now have the boundary be defined by the plane \(y=0\), the point of incidence be \({\bf r}=0\) and have an incident wave approach it of the form \(E = E_i \exp[i({\omega_i t - \bf k_i}\cdot {\bf r})] \hat{\bf k}\times \hat{\bf r}\), where \(\hat{\bf k}\) is a unit vector in the direction of the wave-vector \({\bf k_i}\), and \(\omega_i\) is the angular frequency.

The incident wave impacts at \({\bf r}=0\) and some of the light is transmitted and some reflected. The incident, reflected and transmitted rays are all in the same plane and because, as shown above, the parallel components must be the same either side of the boundary we can write. \[E_i \exp(i\omega_i t) \cos \theta_i + E_r \exp(i\omega_r t)\cos \theta_r = E_t \exp(i\omega_t t)\cos\theta_t,\] where \(\theta_i\) etc are the angles of incidence, reflection, transmission; and \(\omega_r\) and \(\omega_t\) are the frequencies of the reflected and transmitted waves.

But this relationship has to be true for all values of \(t\). The only way this can be arranged is if \(\omega_i = \omega_r = \omega_t\). So the frequency of the light is unchanged as it passes into the medium.

I have taken a shortcut here to get to the required result. Usually, when doing this proof you define a geometry so that the wave hits at various points along the interface and then this means that the arguments of the exponentials look like \((\omega_i t -k_i x\sin\theta_i)\), \((\omega_r t -k_rx\sin\theta_r)\) and \((\omega_t t -k_tx\sin\theta_t)\), where \(x\) is a coordinate along the boundary. Demanding that these arguments are equal for all \(x,t\) also gives you the law of reflection (\(\theta_i = \theta_r\)) and Snell’s refraction law; \(\sin \theta_t/\sin\theta_i = k_i/k_t\), and if \(\omega_t = \omega_i\) and \(\omega/k = c/n\), then \(\sin \theta_t/\sin\theta_i = n_i/n_t\).
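
The phase-matching argument in the last paragraph can be checked numerically. A sketch with assumed indices (air to glass) and a 30° angle of incidence:

```python
import math

# Matching the phase k*x*sin(theta) along the boundary, with omega fixed
# and k = n*omega/c, gives Snell's law. Assumed example values.
n_i, n_t = 1.0, 1.5
theta_i = math.radians(30.0)

theta_t = math.asin((n_i / n_t) * math.sin(theta_i))
print(math.degrees(theta_t))   # ~19.47 degrees

# tangential wavenumbers agree on both sides (common omega/c factored out)
assert abs(n_i * math.sin(theta_i) - n_t * math.sin(theta_t)) < 1e-12
```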

82: How does one prove that Energy = Voltage x Charge? (score 108512 in 2018)

Question

We know \[E = q V\] where \(E\) is the energy (in Joules), \(V\) is the potential difference (in Volts), and \(q\) is the charge. Why is this equation true and how we prove it?

Answer accepted (score 13)

There are various ways to decide which of the assumptions are primary and which of them are their consequences but \(E=VQ\) may be most naturally interpreted as the definition of the potential.

The potential energy is a form of energy and the potential (and therefore voltage, when differences are taken) is defined as the potential energy (or potential energy difference) per unit charge, \(V = E/Q\). That’s equivalent to your equation. The potential energy is proportional to the charge essentially because of the linearity of Maxwell’s equations (the superposition principle). Once we know about the proportionality, we must just give a name to the proportionality factor between \(E\) and \(Q\) and we simply call it potential (or voltage).

Answer 2 (score 4)

There are several (equivalent) ways to look at this.

One is to say that for any conservative force \(\mathbf{F}\), one can define the potential energy Ep as an associated potential field such as \(\mathbf{F}=-\frac{\partial Ep}{\partial r}\), or maybe more formally \(\mathbf{F}=-\nabla(Ep)\). That’s no more than a definition of the potential energy. (Electrostatic forces and gravitational forces have that in common that they are conservative and an associated potential function exists).

At the same time electrostatic forces are (experimentally) observed to be \(\mathbf{F} = q\mathbf{E}\).

Last definition is the electric potential field U : it is also defined as \(\mathbf{E}=-\frac{\partial U}{\partial r}\), or maybe more formally \(\mathbf{E}=-\nabla(U)\).

When one puts all these together, Ep = qU.

I am not sure if this is a proof though, but maybe more the consequence of various definitions of useful quantities and concepts in physics.

Answer 3 (score -2)

electric field strength is \[E=\frac Fq=\frac Vd\] with \(V\)=voltage, \(d\)=distance between charged plates \[\begin{align} \frac Fq&=\frac Vd \\ Fd&=qV \end{align}\] but \(Fd\)=energy
\[\therefore {\rm energy}=qV\]
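
The same chain of substitutions, run numerically with assumed illustrative values:

```python
q = 1.602e-19   # C, elementary charge
V = 100.0       # volts across the plates (assumed)
d = 0.01        # m, plate separation (assumed)

E_field = V / d        # uniform field between the plates
F = q * E_field        # force on the charge
work = F * d           # energy gained crossing the gap
print(work, q * V)     # both ~1.6e-17 J: energy = qV
```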

83: Why is Earth’s gravity stronger at the poles? (score 108234 in 2016)

Question

Many sources state that the Earth’s gravity is stronger at the poles than the equator for two reasons:

  1. The centrifugal “force” cancels out the gravity minimally, more so at the equator than at the poles.
  2. The poles are closer to the center due to the equatorial bulge, and thus have a stronger gravitational field.

I understood the first point, but not the second one.
Shouldn’t the gravity at the equator be greater as there is more mass pulling a body perpendicular to the tangent as more mass is aligned along this axis?

Answer accepted (score 13)

The point is that if we approximate Earth with an oblate ellipsoid, then the surface of Earth is an equipotential surface,\(^1\) see e.g. this Phys.SE post.

Now, because the polar radius is smaller than the equatorial radius, the density of equipotential surfaces at the poles must be bigger than at the equator.

Or equivalently, the field strength\(^2\) \(g\) at the poles must be bigger than at the equator.

\(^1\) Note that the potential here refers to the combined effect of gravitational and centrifugal forces. If we pour a bit of water on an equipotential surface, there would not be a preferred flow direction.

\(^2\) Similarly, the field strength, known as little \(g\), refers to the combined effect of gravitational and centrifugal forces, even if \(g\) is often (casually and somewhat misleading) referred to as the gravitational constant on the surface of Earth.

Answer 2 (score 10)

Lots of places state that the Earth’s gravity is stronger at the poles than the equator for two reasons:

  1. The centrifugal force cancels out the gravity minimally, more so at the equator than at the poles.
  2. The poles are closer to the center due to the equatorial bulge, and thus have a stronger gravitational field.

TL;DR version: There are three reasons. In order of magnitude,

  1. The poles are closer to the center of the Earth due to the equatorial bulge. This strengthens gravitation at the poles and weakens it at the equator.

  2. The equatorial bulge modifies how the Earth gravitates. This weakens gravitation at the poles and strengthens it at the equator.

  3. The Earth is rotating, so an Earth-bound observer sees a centrifugal force. This has no effect at the poles and weakens gravitation at the equator.


Let’s see how the two explanations in the question compare to observation. The following table compares what a spherical gravity model less centrifugal acceleration predicts for gravitational acceleration at sea level at the equator (\(g_{\text{eq}}\)) and the north pole (\(g_{\text{p}}\)) versus the values computed using the well-established Somigliana gravity formula \(g = g_{\text{eq}}(1+\kappa \sin^2\lambda)/\sqrt{1-e^2\sin^2 \lambda}\).

\(\begin{matrix} \text{Quantity} & GM/r^2 & r\omega^2 & \text{Total} & \text{Somigliana} & \text{Error} \\ g_\text{eq} & 9.79828 & -0.03392 & 9.76436 & 9.78033 & -0.01596 \\ g_\text{p} & 9.86431 & 0 & 9.86431 & 9.83219 & \phantom{-}0.03213 \\ g_\text{p} - g_\text{eq} & 0.06604 & \phantom{-}0.03392 & 0.09995 & 0.05186 & \phantom{-}0.04809 \end{matrix}\)

This simple model works in a qualitative sense. It shows that gravitation at the north pole is higher than at the equator. Quantitatively, this simple model is not very good. It considerably overstates the difference between gravitation at the north pole versus the equator, almost by a factor of two.

The problem is that this simple model does not account for the gravitational influence of the equatorial bulge. A simple way to think of that bulge is that it adds positive mass at the equator but adds negative mass at the poles, for a zero net change in mass. The negative mass at the pole will reduce gravitation in the vicinity of the pole, while the positive mass at the equator will increase equatorial gravitation. That’s exactly what the doctor ordered.

Mathematically, what that moving around of masses does is to create a quadrupole moment in the Earth’s gravity field. Without going into the details of spherical harmonics, this adds a term equal to \(3 J_2 \frac {GMa^2}{r^4}\left(\frac 3 2 \cos^2 \lambda - 1\right)\) to the gravitational force, where \(\lambda\) is the geocentric latitude and \(J_2\) is the Earth’s second dynamic form factor. Adding this quadrupole term to the above table yields the following:

\(\begin{matrix} \text{Quantity} & GM/r^2 & r\omega^2 & J_2\,\text{term} & \text{Total} & \text{Somigliana} & \text{Error} \\ g_\text{eq} & 9.79828 & -0.03392 & \phantom{-}0.01591 & 9.78027 & 9.78033 & -0.00005 \\ g_\text{p} & 9.86431 & 0 & -0.03225 & 9.83206 & 9.83219 & -0.00013 \\ g_\text{p} - g_\text{eq} & 0.06604 & \phantom{-}0.03392 & -0.04817 & 0.05179 & 0.05186 & -0.00007 \end{matrix}\)

This simple addition of the quadrupole now makes for a very nice match.
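
The table can be reproduced with a short script using the constants listed below. This is a sketch; the \(\cos^2\lambda\) radial projection of the centrifugal term is my own simplification (it is exact at the two latitudes evaluated here):

```python
import math

# Constants from the answer (Groten 2004, atmosphere excluded).
mu = 398600.0982e9        # m^3/s^2, gravitational parameter
a = 6378136.72            # m, equatorial radius
f = 1.0 / 298.25231       # flattening
omega = 7.292115855e-5    # rad/s, rotation rate
J2 = 0.0010826359         # second dynamic form factor
g_eq_obs = 9.7803267714   # m/s^2, observed equatorial gravity
kappa = 0.00193185138639
e2 = 0.00669437999013

b = a * (1.0 - f)         # polar radius

def model_g(r, lat):
    """Monopole - centrifugal + J2 quadrupole, at geocentric latitude lat."""
    monopole = mu / r**2
    centrifugal = r * omega**2 * math.cos(lat)**2   # radial component
    quad = 3.0 * J2 * mu * a**2 / r**4 * (1.5 * math.cos(lat)**2 - 1.0)
    return monopole - centrifugal + quad

def somigliana(lat):
    s2 = math.sin(lat)**2
    return g_eq_obs * (1.0 + kappa * s2) / math.sqrt(1.0 - e2 * s2)

g_eq = model_g(a, 0.0)            # ~9.78027, vs Somigliana ~9.78033
g_p = model_g(b, math.pi / 2.0)   # ~9.83206, vs Somigliana ~9.83219
print(g_eq, somigliana(0.0))
print(g_p, somigliana(math.pi / 2.0))
```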


The numbers I used in the above:

  • \(\mu_E = 398600.0982\,\text{km}^3/\text{s}^2\), the Earth’s gravitational parameter less the atmospheric contribution.

  • \(R_\text{eq} = 6378.13672\,\text{km}\), the Earth’s equatorial radius (mean tide value).

  • \(1/f = 298.25231\), the Earth’s flattening (mean tide value).

  • \(\omega = 7.292115855 \times 10^{-5}\,\text{rad}/\text{s}\), the Earth’s rotation rate.

  • \(J_2 = 0.0010826359\), the Earth’s second dynamic form factor.

  • \(g_{\text{eq}} = 9.7803267714\,\text{m}/\text{s}^2\), gravitation at sea level at the equator.

  • \(\kappa = 0.00193185138639\), which reflects the observed difference between gravitation at the equator versus the poles.

  • \(e^2 = 0.00669437999013\), the square of the eccentricity of the figure of the Earth.

These values are mostly from Groten, “Fundamental parameters and current (2004) best estimates of the parameters of common relevance to astronomy, geodesy, and geodynamics.” Journal of Geodesy, 77:10-11 724-797 (2004), with the standard gravitational parameter modified to exclude the mass of the atmosphere. The Earth’s atmosphere has a gravitational effect on the Moon and on satellites, but not so much on people standing on the surface of the Earth.

Answer 3 (score 3)

Here’s a simple argument that doesn’t require any knowledge of fancy stuff like equipotentials or rotating frames of reference. Imagine that we could gradually spin the earth faster and faster. Eventually it would fly apart. At the moment when it started to fly apart, what would be happening would be that the portions of the earth at the equator would be at orbital velocity. When you’re in orbit, you experience apparent weightlessness, just like the astronauts on the space station.

So at a point on the equator, the apparent acceleration of gravity \(g\) (i.e., what you measure in a laboratory fixed to the earth’s surface) goes down to zero when the earth spins fast enough. By interpolation, we expect that the effect of the actual spin should be to decrease \(g\) at the equator, relative to the value it would have if the earth didn’t spin.

Note that this argument automatically takes into account the distortion of the earth away from sphericity. The oblate shape is just part of the interpolation between sphericity and break-up.

It’s different at the poles. No matter how fast you spin the earth, a portion of the earth at the north pole will never be in orbit. The value of \(g\) will change because of the change in the earth’s shape, but that effect must be relatively weak, because it can never lead to break-up.
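
The break-up condition is easy to quantify: break-up starts when the equator reaches orbital velocity. A rough sketch with rounded, assumed values:

```python
import math

GM = 3.986e14        # m^3/s^2, Earth's gravitational parameter (rounded)
R = 6.371e6          # m, mean radius (rounded)
omega = 7.292e-5     # rad/s, current rotation rate

v_orbit = math.sqrt(GM / R)   # speed needed to orbit at the surface, ~7.9 km/s
v_equator = omega * R         # actual speed of the equator, ~465 m/s

print(v_orbit, v_equator)     # break-up needs ~17x faster spin than today
```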

84: Why won’t my door close in the winter? (score 107920 in )

Question

Please, take this question seriously, because this is a real problem to me. I have a door in my flat. A closet door, to be specific. And there is a problem with it.

In the summer everything is ok, the standard wood door opens and closes as predicted, but in the winter time it drives me crazy! It just won’t close! Seems that it doesn’t fit where it belongs. Damn door just doesn’t allow me to welcome guests and take a cup of mulled wine with them.

So the question is: what physical processes are behind that misfit? I believe that the sun and gravity may be involved, but I don’t really understand it. So, please, help. Once I know the problem, I’ll figure out a solution.

P.S Maybe it is important that I live in Kazan’, Russia

Answer accepted (score 15)

I would say it has to do with humidity, since it is a wooden door. It probably gets too humid during the winter and the wood expands. If it had to do with temperature, it would be the opposite effect (it would expand during the summer when it is hot).

Answer 2 (score 2)

It’s actually just that in winter, and around that time, water doesn’t evaporate as fast. So the cellulose in the wood expands due to the added water content and swells until the door no longer fits its frame. (1)

It’s really interesting just how beautifully everything plays out in nature. If you think about it, that property probably saved a lot of trees from dying. If wood expands in moisture, the tree will be taller in just the right conditions; if it contracts, the tree will be shorter in dry conditions. That probably reduces the surface area of evaporation and helps the tree survive. I might be wrong, but it’s mind-boggling just how awesome nature is. She rocks. :D

  1. Cellulose is a polymer, and I guess the expansion might have something to do with hydrogen bonding interacting with its structure.

85: How to find the phase constant? (score 107333 in )

Question

I was given this velocity-vs-time graph of a particle in simple harmonic motion:

[figure: velocity-vs-time graph of the particle]

I determined the amplitude to be \(A = 1.15\) m, which Mastering Physics confirmed is correct.

Then I was asked to find the phase constant. I tried a few things but finally gave up and asked Mastering Physics for the answer, which is: \(\phi_0=2.62\) rad.

Since I had this equation in my notes,

\[v_x(t)=-\omega A sin(\omega t+\phi_0)=-v_\max sin(\omega t+\phi_0)\]

First I found the angular frequency \(\omega\):

\[\omega = {2\pi \over T}={2\pi \over 12 s}=0.523\]

Multiplied by \(A=1.15\), it matches the \(v_{max}\) of 0.60 m/s shown in the graph.

Then I tried this, using \(t\) = 0 s:

\[v_x(t)=-\omega A sin(\omega t+\phi_0)\] \[-0.30=(-0.523)(1.15)sin(\omega (0)+\phi_0)\] \[-0.30=(-0.60)sin(\phi_0)\] \[{-0.30 \over -0.60}=sin(\phi_0)\] \[0.5=sin(\phi_0)\] \[sin^{-1}(0.5)=\phi_0\] \[0.523=\phi_0\]

But that just gave me the angular frequency value, not the correct answer (which is 2.62 rad).

Did I do something wrong in the calculation, or am I confusing units? My calculator was on radians mode.

Note: My algebra skills are pretty weak, so I’m thinking that’s where the mistake might be… ?

Answer accepted (score 1)

You need to be careful about what exactly the inverse sine function is doing. Given an input \(x\), arcsin returns an angle \(y\) such that \(\sin(y) = x\).

If you consider \(\sin(x)\):

[plot of sin(x)]

You’ll see that \[ \sin(0.523) \approx 0.5 \\ \sin(2.62) \approx 0.5 \\ \sin(6.81) \approx 0.5 \\... \]

The equation \(\sin(y) = x\) doesn’t have just a single solution (although most calculators will only show one); it has an infinitely large set of discrete solutions.

Now, the reason the problem wanted the 2.62 answer has to do with assumptions on the original displacement wave function. Generally, the equations for the displacement and velocity are of the form \[ x(t)=A \cos(\omega t + \phi)\\ \frac{dx}{dt}=v(t)=-\omega A \sin(\omega t + \phi) \] Below, I’ve generated plots of these functions, where \(A=1\), \(\omega=1\), and \(\phi=0\). You’ll see that the “unshifted” functional waveform of the velocity function is similar in shape to a \(-\sin(x)\) function.

If you take a look at your original, you’ll see that shifting it left by 0.523 would give a graph that looks similar to sin(x), while shifting it left by the correct answer, 2.62, would give you a graph that looks similar to a -sin(x) plot (and similar to what the “unshifted” velocity function looks like).

[plots of x(t) and v(t)]
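
A sketch of the ambiguity: both arcsin candidates, 0.523 and \(\pi - 0.523 \approx 2.62\), reproduce \(v(0) = -0.30\) m/s, but they predict opposite initial slopes, which is what the shape of the graph disambiguates:

```python
import math

A, T = 1.15, 12.0              # amplitude (m) and period (s) from the problem
w = 2.0 * math.pi / T          # angular frequency, ~0.524 rad/s

def v(t, phi):
    return -w * A * math.sin(w * t + phi)

for phi in (0.523, math.pi - 0.523):       # the two arcsin candidates
    slope = -w**2 * A * math.cos(phi)      # dv/dt at t = 0
    print(phi, v(0.0, phi), slope)
# phi = 0.523 gives v(t) falling at t = 0; phi ~ 2.62 gives it rising.
```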

86: What is the electric field in a parallel plate capacitor? (score 106917 in 2016)

Question

When we find the electric field between the plates of a parallel plate capacitor we assume that the electric field from both plates is \[{\bf E}=\frac{\sigma}{2\epsilon_0}\hat{n}.\] The factor of two in the denominator comes from the fact that there is a surface charge density on both sides of the (very thin) plates. This result can be obtained easily for each plate. Therefore when we put them together the net field between the plates is \[{\bf E}=\frac{\sigma}{\epsilon_0}\hat{n}\] and zero everywhere else. Here, \(\sigma\) is the surface charge density on a single side of the plate, or \(Q/2A\), since half the charge will be on each side.

But in a real capacitor the plates are conducting, and the surface charge density will change on each plate when the other plate is brought closer to it. That is, in the limit that the two plates get brought closer together, all of the charge of each plate must be on a single side. If we let \(d\) denote the distance between the plates, then we must have \[\lim_{d \rightarrow 0}{\bf E}=\frac{2\sigma}{\epsilon_0}\hat{n}\] which disagrees with the above equation. Where is the mistake in this reasoning?

Or more likely, do our textbook authors commonly assume that we are in this limit, and that this is why the conductor behaves like a perfectly thin charged sheet?

Answer accepted (score 18)

When discussing an ideal parallel-plate capacitor, \(\sigma\) usually denotes the area charge density of the plate as a whole - that is, the total charge on the plate divided by the area of the plate. There is not one \(\sigma\) for the inside surface and a separate \(\sigma\) for the outside surface. Or rather, there is, but the \(\sigma\) used in textbooks takes into account all the charge on both these surfaces, so it is the sum of the two charge densities.

\[\sigma = \frac{Q}{A} = \sigma_\text{inside} + \sigma_\text{outside}\]

With this definition, the equation we get from Gauss’s law is

\[E_\text{inside} + E_\text{outside} = \frac{\sigma}{\epsilon_0}\]

where “inside” and “outside” designate the regions on opposite sides of the plate. For an isolated plate, \(E_\text{inside} = E_\text{outside}\) and thus the electric field is everywhere \(\frac{\sigma}{2\epsilon_0}\).

Now, if another, oppositely charged plate is brought nearby to form a parallel plate capacitor, the electric field in the outside region (A in the images below) will fall to essentially zero, and that means

\[E_\text{inside} = \frac{\sigma}{\epsilon_0}\]

There are two ways to explain this:

  • The simple explanation is that in the outside region, the electric fields from the two plates cancel out. This explanation, which is often presented in introductory textbooks, assumes that the internal structure of the plates can be ignored (i.e. infinitely thin plates) and exploits the principle of superposition.

    electric fields in superposition
  • The more realistic explanation is that essentially all of the charge on each plate migrates to the inside surface. This charge, of area density \(\sigma\), is producing an electric field in only one direction, which will accordingly have strength \(\frac{\sigma}{\epsilon_0}\). But when using this explanation, you do not also superpose the electric field produced by charge on the inside surface of the other plate. Those other charges are the terminators for the same electric field lines produced by the charges on this plate; they’re not producing a separate contribution to the electric field of their own.

    electric field from one plate to the other

Either way, it’s not true that \(\lim_{d\to 0} E = \frac{2\sigma}{\epsilon_0}\).
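
With the textbook convention \(\sigma = Q/A\), a numeric sketch (all values assumed for illustration) confirming \(E = \sigma/\epsilon_0\) and the familiar \(C = \epsilon_0 A/d\):

```python
eps0 = 8.8541878128e-12   # F/m, vacuum permittivity
Q = 1e-9                  # C, charge on one plate (assumed)
A = 1e-2                  # m^2, plate area (assumed)
d = 1e-3                  # m, plate separation (assumed)

sigma = Q / A             # total area charge density of a plate
E = sigma / eps0          # field between the plates: sigma/eps0, not 2*sigma/eps0
V = E * d
C = Q / V
print(E, V, C, eps0 * A / d)   # C comes out equal to eps0*A/d
```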

Answer 2 (score 1)

The very short, but perhaps terse answer is that it does not matter on which side of the plate the charge resides. The field outside a charged plate, conducting or not, is \(E = \sigma/2\epsilon_0\) if the surface density of both sides combined is \(\sigma\). The plate does not even have to be thin.

Answer 3 (score 0)

Let us assume a uniform field \(E\) . Applying \(\nabla \cdot D = Q\) , and noting that all components of \(E\) vanish inside a perfect conductor, gives \(\sigma = \epsilon_0 E\) at one surface and \(E\sigma = -\epsilon_0 E\) at the other. The conductors are not infinitesimal sheets.

87: With Newton’s third law, why are things capable of moving? (score 106628 in 2013)

Question

I’ve got a rather humiliating question concerning Newton’s third law:

“If an object A exerts a force on object B, then object B exerts an equal but opposite force on object A” -> \(F_1=-F_2\)

Considering that, why is there motion at all? Should not all forces even themselves out, so nothing moves at all?

When I push a table using my finger, the table applies the same force onto my finger as my finger does on the table, just in the opposing direction; nothing happens except that I feel the opposing force.

But why can I push a box on a table by applying a force (\(F=ma\)) on one side, obviously overcoming the force the box exerts on my finger and at the same time overcoming the friction between the box and the table?

I obviously have the greater mass and acceleration compared to, for example, the matchbox on the table, and thus I can move it, but shouldn’t the third law prevent that from even happening? Shouldn’t the matchbox just take up said force and apply the same force to me in the opposing direction?

I’ve found a lot of answers considering that question but none was satisfying to an extend that I had an epiphany solving my fundamental problem I’ve got understanding it.

Answer accepted (score 143)

I think it’s a great question, and enjoyed it very much when I grappled with it myself.

Here’s a picture of some of the forces in this scenario.\(^\dagger\) The ones that are the same colour as each other are pairs of equal magnitude, opposite direction forces from Newton’s third law. (W and R are of equal magnitude in opposite directions, but they’re acting on the same object - that’s Newton’s first law in action.)

Finger Prodding Matchbox

While \(F_{matchbox}\) does press back on my finger with an equal magnitude to \(F_{finger}\), it’s no match for \(F_{muscles}\) (even though I’ve not been to the gym in years).

At the matchbox, the forward force from my finger overcomes the friction force from the table. Each object has an imbalance of forces giving rise to acceleration leftwards.

The point of the diagram is to make clear that the third law makes matched pairs of forces that act on different objects. Equilibrium from Newton’s first or second law is about the resultant force at a single object.

\(\dagger\) (Sorry that the finger doesn’t actually touch the matchbox in the diagram. If it had, I wouldn’t have had space for the important safety notice on the matches. I wouldn’t want any children to be harmed because of a misplaced force arrow. Come to think of it, the dagger on this footnote looks a bit sharp.)
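
To make the free-body bookkeeping concrete, here is a sketch with assumed force magnitudes (in newtons). The third-law pair appears once in each object’s ledger, never twice in the same one:

```python
# Forces on the finger (rightward positive); values are illustrative.
F_muscles = 5.0               # muscles drive the finger forward
F_matchbox_on_finger = -2.0   # reaction from the matchbox

# Forces on the matchbox.
F_finger_on_matchbox = 2.0    # third-law partner of F_matchbox_on_finger
F_friction = -0.5             # table friction on the matchbox

net_finger = F_muscles + F_matchbox_on_finger       # 3.0 N: finger accelerates
net_matchbox = F_finger_on_matchbox + F_friction    # 1.5 N: matchbox accelerates
print(net_finger, net_matchbox)
```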

Answer 2 (score 27)

I had similar problem in understanding the 3rd law. I found the answer myself while sitting in my study chair which has wheels!

sitting in the chair, I folded my legs up so that they are not in touch with ground. Now I pushed the wall with my hands. Of course, wall didn’t move but my chair and I moved backward! why? because wall pushed me back and wheels could overcome the friction.

I was mixing things up earlier: trying to cancel forces where one cannot.

Movement of the matchbox is due to the force which you apply on it. period.

Now, the reason you didn’t move when the matchbox applied the equal force on you is friction. If you reduce the friction, as I did sitting in the chair, you too will move in the opposite direction.

Equilibrium can only establish itself when the forces act on the same object.

Alas, I am free from this confusion. Such a relief!

Answer 3 (score 25)

Good! This question implies that you’re thinking hard and questioning the laws. It turns out that you are misunderstanding Newton’s 2nd Law, though. Motion of a body is due to an external force. F1 acts on your box, but F2 does not. An object can never act on itself.

88: Will a hole cut into a metal disk expand or shrink when the disc is heated? (score 105382 in 2017)

Question

Take a metal disc and cut a small, circular hole in the center. When you heat the whole thing, will the hole’s diameter increase or decrease, and why? What will happen to the diameter of the disc?

Answer accepted (score 64)

Instead of a circular hole, let’s think of a square hole. You can get a square hole two ways: you can cut it out of a complete sheet, or you can get one by cutting a sheet into 9 little squares and throwing away the center one. Since the 8 outer squares all get bigger when you heat them, the inner square (the hole) also has to get bigger:

[image: a 3×3 grid of squares, all expanding together]

Same thing happens with a round hole.


This is confusing to people because the primary experience they have with stuff getting larger when heated is cooking. If you leave a hole in the middle of a cookie and bake it, yes, the cookie gets bigger and the hole gets smaller. But the reason for this is that the cookie isn’t really solid; it’s more like a liquid, and it deforms. And as Ilmari Karonen points out, the cookie sheet isn’t expanding much, so there are frictional forces at work.

Answer 2 (score 36)

David Zaslavski’s answer is correct and complete. But I want to propose a different way to look at the problem.

Think of the disc that was cut out, and imagine that you heat it too, exactly as you heat the plate. After heating, the disc will fit in exactly to the hole, just as if it was first heated and then cut out. Therefore, the hole will expand.

Answer 3 (score 35)

Good question! Assuming the disc is uniform and isotropic (the same in different directions), the hole will expand in the same ratio as the metal. You can see this because the thermal expansion equation

\[\mathrm{d} L = L\alpha\mathrm{d}T\]

applies to all lengths associated with the metal, including the circumference of the hole, since the edge of the hole is made out of metal. And if the circumference of the hole expands, so does the diameter.

If you have a disc with different regions that are made of different types of metal, or if the metal that makes up your disc has an anisotropic crystal structure (so that it expands by different factors in different directions), then the analysis is more complicated. But in both cases, I think the hole would still get larger since the overall change in size is still an expansion.
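As a quick numeric illustration of \(\mathrm{d} L = L\alpha\mathrm{d}T\) applied to the hole diameter (the material, hole size, and temperature rise below are assumed, not from the question):

```python
# Linear thermal expansion applied to the hole: every length associated
# with the metal, including the hole diameter, scales the same way.
# Assumed values: steel-like alpha, 10 mm hole, 100 K temperature rise.

alpha = 12e-6   # 1/K, linear expansion coefficient (assumed, ~steel)
d0 = 0.010      # m, initial hole diameter (assumed)
dT = 100.0      # K, temperature rise (assumed)

d_new = d0 * (1 + alpha * dT)   # dL = L * alpha * dT, so L -> L(1 + alpha*dT)
print(f"hole grows from {d0*1e3:.3f} mm to {d_new*1e3:.3f} mm")
```

The change is tiny but positive: the hole expands along with the disc.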

In order to get the hole to shrink, you would need to use a material with a negative thermal expansion coefficient \(\alpha < 0\), which means it gets smaller as the temperature gets higher. In that case the entire disc would shrink as it heats up. Wikipedia has an entry on these kinds of materials (h/t Kevin Reid).

89: Why does the air we blow/exhale out from our mouths change from hot to cold depending on the size of the opening we make with our mouth? (score 105092 in )

Question

Why does the air we blow/exhale out from our mouths change from hot to cold depending on the size of the opening we make with our mouth?

It’s not just a subtle difference, but significant in my opinion. I’m inclined to discredit the notion that it’s just a matter of speed, because I can blow fast with an open mouth and still it’s hot, and blow slowly with an almost closed (tighter) mouth and again it’s cold.

Answer accepted (score 26)

It is mainly due to air entrainment.

If you blow through a tight mouth, there is a smaller volume of air but a higher velocity. This pulls in and mixes with a lot of ambient air (the Venturi or Bernoulli effect); in fact, typically the air stream is only 40% body-warm air and 60% ambient air, so it will be markedly colder. As an experiment, pucker and blow through a tube held to your mouth; this excludes the ambient air, and you will get reduced airflow but at the higher temperature again.

With a wide mouth there is hardly any air entrainment, so the stream’s temperature will be almost the same as in your lungs, which is higher than the ambient temperature.
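The 40%/60% mixing figure turns into a quick temperature estimate. The ambient temperature below is an assumption, and equal specific heats of the mixed streams are assumed:

```python
# Mass-weighted mixing estimate: a narrow jet ends up ~40 % breath and
# ~60 % entrained ambient air (fractions from the answer; equal specific
# heats assumed).

T_breath = 37.0    # deg C, body temperature
T_ambient = 20.0   # deg C, room air (assumed)

T_jet_narrow = 0.4 * T_breath + 0.6 * T_ambient   # pursed lips, entrainment
T_jet_wide = T_breath                             # open mouth, no entrainment

print(f"narrow jet ~ {T_jet_narrow:.1f} C, wide jet ~ {T_jet_wide:.1f} C")
```

The entrained jet lands well below skin temperature, which is why it feels cold.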

Answer 2 (score 1)

I don’t think it’s quite that physical; the pressures involved aren’t that high.

When you breathe slowly on your hand, the air is warm, moist and at 37 °C, so it feels warm compared to the surroundings. If you blow through a small opening, the faster flow of air increases the cooling and evaporation from your skin.

90: How do you start learning physics by yourself? (score 105057 in 2017)

Question

I think this question has its place here because I am sure some of you are “self-taught experts” and can guide me a little through this process.

Considering that :

  • I don’t have any scholarly physics background at all.
  • I have a little math background, but nothing as advanced as calculus.
  • I am a fast learner and am willing to put a lot of effort into learning physics.
  • I am a computer programmer and analyst with a passion for physics laws, theories, and studies, and everything that helps me understand how things work or changes the way I see things around me.

Where do I start? I mean, is there even a starting point? Do I absolutely have to choose a branch of physics? Is it possible to learn physics on your own?

I know it’s a general question, but I’m sure you understand that I’m a little bit in the dark here.

Answer accepted (score 21)

1.) Find something that interests you. The secret to learning is to do something you can be passionate about. For one person it may be building metal detectors (circuits etc.) and another may be more interested in string theory or crystal physics. Explore your local library’s physics section.

2.) Become competent in the area that interests you. Thomas Edison was home-schooled and taught himself everything, but he also was given the freedom to learn whatever interested him. You will have to use relevant books. The internet alone will not suffice.

3.) Learn the Math. Focus on the theory and the equations should come naturally. Albert Einstein had to learn a lot of math before he could express his ideas in equations. Calculus 1, 2, and 3 are commonly used in much of physics. Khan academy may be useful here.

4.) Find another topic that interests you within physics. That should be easy since you will doubtlessly have stumbled across many fascinating concepts while investigating the first.

5.) Repeat steps 2-4.

There is nothing we can’t do if we work hard, never sleep, and shirk all other responsibility.

Answer 2 (score 16)

Just a note in addition to the advice being given here is this:

ACTUALLY DO THE PROBLEMS. Like on pen and paper. Do not under any circumstances look at a solution and go “Oh yeah, I get this. Next!” That approach is absolute bull; it is what many, many people who attempt to self-study physics end up trying, and it is why a lot of them fail. It is very easy to skip the grinding, difficult work of actually solving problems and just read examples and theories, but at that point you may as well pick up a popular science book and save yourself some heartbreak.

I am not kidding about this. If you take one piece of advice from this thread, let it be this. I am sure other people who have been formally educated as physicists will echo my sentiment.

91: Stopping potential in the photoelectric effect, collector work function (score 104294 in 2014)

Question

In this question I am talking about the following situation:

[diagram: photoelectric-effect apparatus with emitter and collector electrodes]

Now, I know that the max kinetic energy of the electrons emitted is

\(KE_{max} = h\nu - e\phi_{em}\)

where \(\phi_{em}\) is the work function of the emitter electrode (on the left in the diagram). And my lecturer agrees with that, but he tells us that the stopping potential \(V_0\) can be found using

\(eV_0 = h\nu - e\phi_{col}\)

where \(\phi_{col}\) is the work function of the collector electrode (on the right in the diagram). The emitter and collector electrodes are made from different metals.

What I don’t understand is why the stopping potential doesn’t depend on the kinetic energy of the emitted electrons.

EDIT

I have attached the slide from the lecture course

Lecture slide

Answer accepted (score 4)

But the stopping potential does depend on the kinetic energy of the electrons. The stopping potential is defined as the potential necessary to stop any electron (or, in other words, to stop even the electron with the most kinetic energy) from ‘reaching the other side’.

As you already stated, the maximum kinetic energy is given by \[K_\text{max}=h\nu-e\phi_\text{em}\] In order to stop an electron with this amount of kinetic energy, you have to impose an electric field such that it will lose exactly this amount of energy while traversing it, so that it stops just slightly before reaching the other end of the setup shown in your picture.

The energy gained or lost by a charged object traversing this static electric field is given by the simple formula \[\Delta K=q(V_\text{final}-V_\text{initial})\] In the case of an electron, \(q=e\), while the difference in potential can be denoted by \(\Delta V\equiv V_\text{final}-V_\text{initial}\) Now, if we want to stop the most energetic electrons, but only barely, we have to make sure that \(|\Delta K|=K_\text{max}\). Let us denote the corresponding potential difference, the stopping potential, by \(V_0\). Then, we obtain \[\Delta K=K_\text{max}=eV_0=h\nu-e\phi_{em} \] As you see, we are only considering the most energetic electrons when we want to make sure all of them are stopped. This explains why the velocity does not appear as a variable.
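A numeric sketch of the relation \(eV_0 = h\nu - e\phi_\text{em}\). The wavelength and work function below are illustrative assumptions, not values from the question:

```python
# Stopping potential from e*V0 = h*nu - e*phi_em.
# Assumed inputs: 400 nm light, sodium-like work function of 2.3 eV.

h = 6.626e-34   # J s, Planck constant
c = 2.998e8     # m/s, speed of light
e = 1.602e-19   # C, elementary charge

wavelength = 400e-9   # m (assumed)
phi_em = 2.3          # eV, emitter work function (assumed)

photon_eV = h * c / (wavelength * e)   # photon energy, converted J -> eV
V0 = photon_eV - phi_em                # stopping potential in volts

print(f"photon: {photon_eV:.2f} eV, stopping potential: {V0:.2f} V")
```

Only the most energetic electrons matter, so the individual electron velocities never enter the calculation.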

Answer 3 (score 1)

The potential difference needed to just stop the photoelectric current is known as the stopping potential. When the electrons of highest energy are stopped, the electrons of lower energy are already stopped. Using the photoelectric equation, \(eV = K_\text{max}\) and \(K_\text{max} = h\nu - e\phi\), we clearly observe that the stopping potential depends on the frequency of the incident radiation and the work function of the emitter.

92: Difference between live and neutral wires (score 103697 in )

Question

In domestic electrical circuits, there are 3 wires - live, earth and neutral. What is the difference between the live and neutral wires?

As there is AC supply, it means that there are no fixed positive and negative terminals. Current rapidly switches direction. Does that not mean that both the live and neutral wires carry the same amount of current, alternatively? Why is the neutral wire called ‘neutral’?

Answer accepted (score 13)

You can either understand the concept of the neutral wire mathematically or practically. Since I’m more of a practical guy, let’s take a look at the bigger picture. There is no neutral wire coming from the generator, nor in transmission systems. The neutral wire is only implemented at the distribution (4-wire) and reticulation (live and neutral... and earth) ends of the picture.

Why is this, you may wonder? The reason is that at the generator and transmission level, the lines or conductors have nearly identical impedance (ideally identical); therefore, the voltages between each of the 3 lines are of the same magnitude but 120 degrees apart from each other in phase. At the distribution level, your loads are far from identical; in fact, each time a consumer of electricity switches a light on, the entire impedance of the distribution network changes.

This means that without a neutral wire, the voltage across each load and the voltage between phases would differ, which is not ideal for either the consumer or the electrical system, as it results in an imbalance of the electrical distribution system. Loads with greater impedance would see a larger voltage drop across them than loads with less impedance. The effects of this can be devastating on equipment not designed to handle a changing supply voltage; not to mention, your lights would fluctuate between dim and the sun like a disco club. This is where the neutral wire comes into play. The neutral wire is connected at a common point to all three phases, ideally at \(0\,V\), e.g. in a star configuration.

This ensures that even if the load impedances of the phases differ, the voltage is kept constant. This is why you only have \(220\,V\) (RMS) and \(110\,V\) (RMS) or other standard voltage levels: it is the electrical current that should always be allowed to fluctuate. With the neutral implemented, we get constant voltage across any load (impedance) with varying current.

How does the neutral wire make this possible? Since the neutral wire sits at a potential common to all three phases, each phase together with the neutral wire can form an independent circuit, e.g. your house; hence live and neutral. It is the role of the neutral wire to carry any current resulting from the imbalance in the impedances of the phases’ loads. This maintains a stable, standard voltage rating. Remember that voltage is always relative to another voltage level.

If \(220\,V\) is the high side, neutral is the low side; and since there is this potential difference, an electrical circuit may be formed in the first place.

Now, to answer the question posed in this topic: the live wire is the wire that can be traced back all the way to the nearest transformer(s), whose phase wires can be traced back to the generator’s stator windings all the way at the power station. Neutral is the wire tied at the low-potential end between the phases, enabling the completion of a circuit and maintaining a stable voltage level.

Since the neutral wire completes an electrical circuit (in terms of alternating current), it carries the same current as the live or phase wire tracing back to the generator; however, its potential to earth is nearly \(0\,V\). The voltage between the phase and earth would be \(220\,V\), so the phase wire alternates the current direction between the maximum positive and maximum negative peaks of the AC cycle.
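The imbalance argument can be sketched with phasors, using Python’s standard complex-number support. The load magnitudes below are assumed values, not from the answer:

```python
# Neutral current in a three-phase star system: the load currents are
# phasors 120 degrees apart, and the neutral carries their (vector) sum.
# Load magnitudes are assumed for illustration.
import cmath
import math

def phase_current(magnitude, angle_deg):
    """Return a current phasor with the given magnitude and phase angle."""
    return cmath.rect(magnitude, math.radians(angle_deg))

# Balanced: 10 A on each phase -> the phasors cancel, neutral current ~ 0.
balanced = sum(phase_current(10, a) for a in (0, -120, -240))

# Unbalanced: 10 A, 8 A, 5 A -> the neutral carries only the imbalance.
unbalanced = sum(m * phase_current(1, a)
                 for m, a in ((10, 0), (8, -120), (5, -240)))

print(f"balanced neutral:   {abs(balanced):.3f} A")
print(f"unbalanced neutral: {abs(unbalanced):.3f} A")
```

With identical loads the neutral is idle; any mismatch between phases shows up as neutral current, exactly as the answer describes.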

Answer 2 (score 3)

Live, neutral, and earth are labels that convey some information about the use of each wire.
You are correct in thinking that in a typical two-wire AC loop/circuit, both wires carry the same amount of current (amps). So both wires could be considered live wires.
In a 3-wire circuit, the neutral wire is created when you have a power source that is center-tapped (CT), thereby creating two loops that are 180 degrees out of phase with each other. For example, when you have 220V center-tapped, each loop is 110V with respect to the CT. If each loop has the same load (current), then the CT current is zero. Because of this, the CT wire is called the neutral wire (even when it carries some current).
Because of safety concerns, the neutral wire is grounded at the mains panel, and the grounding wire is called earth because it is physically grounded to earth (via water pipes or a metal rod). To sum up, a live wire carries the full load current, while a neutral wire carries some current only when the loads are not balanced.
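The center-tap arithmetic reduces to a one-line sketch; the two loop currents below are assumed values:

```python
# Center-tapped supply: two half-voltage loops share the CT (neutral)
# wire, and since the loops are 180 degrees out of phase, the CT current
# is the DIFFERENCE of the loop currents.  Load currents are assumed.

I_loop1 = 12.0   # A, load on one half of the winding (assumed)
I_loop2 = 9.0    # A, load on the other half (assumed)

I_neutral = abs(I_loop1 - I_loop2)   # zero only when the loads balance
print(f"neutral carries {I_neutral:.1f} A")
```

Equal loads give zero neutral current, which is why the wire earned the name despite sometimes carrying current.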

Answer 3 (score 1)

What is the difference between the live and neutral wires?

LIVE WIRE: The live wire is connected directly to the generators of the electricity supply company. It carries current at a high voltage (about \(220\)–\(230\,\mathrm{V}\)).

NEUTRAL WIRE: The neutral wire returns the electricity to the generator after it has passed through the appliance; it completes the circuit. The neutral wire is at approximately \(0\,\mathrm{V}\), but to be safe you must NEVER touch this wire either. If the wiring is faulty, it may be carrying the same current as the live wire.

Does that not mean that both the live and neutral wires carry the same amount of current, alternatively?

Yes, they do, as I have mentioned above. The live wire carries the current towards domestic appliances and other instruments, and the neutral wire emerges from the appliance and carries the current back to the source.

Why is the neutral wire called ‘neutral’?

Basically, it is so called because it is at \(0\,\mathrm{V}\). Nothing else.

93: What is the basic difference between absolute pressure and gauge pressure? (score 103188 in 2018)

Question

What is the basic difference between absolute pressure and gauge pressure?

Answer accepted (score 1)

According to Wikipedia :

Absolute pressure is zero-referenced against a perfect vacuum, so it is equal to gauge pressure plus atmospheric pressure.

Gauge pressure is zero-referenced against ambient air pressure, so it is equal to absolute pressure minus atmospheric pressure. Negative signs are usually omitted. To distinguish a negative pressure, the value may be appended with the word “vacuum” or the gauge may be labeled a “vacuum gauge.”

So suppose you are in the ocean at depth \(d\): your absolute pressure would be \(P_{atm}+\rho g d\), while your gauge pressure would be simply \(\rho g d\). You can also say that if at any point your total pressure is \(P\), your absolute pressure is \(P\) while your gauge pressure is \(P - P_{atm}\).
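A minimal sketch of the two conventions, at an assumed depth of 10 m of fresh water:

```python
# Gauge vs absolute pressure at depth d in water:
#   gauge    = rho * g * d        (zero-referenced to the surface/atmosphere)
#   absolute = P_atm + rho * g * d (zero-referenced to vacuum)
# The depth is an assumed example value.

rho = 1000.0      # kg/m^3, fresh water
g = 9.81          # m/s^2, gravitational acceleration
P_atm = 101_325   # Pa, standard atmosphere
d = 10.0          # m, depth (assumed)

P_gauge = rho * g * d
P_abs = P_atm + P_gauge

print(f"gauge: {P_gauge/1e3:.1f} kPa, absolute: {P_abs/1e3:.1f} kPa")
```

At roughly 10 m, the gauge pressure alone is close to one extra atmosphere, which is the usual diver’s rule of thumb.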

94: How to get distance when acceleration is not constant? (score 100744 in 2013)

Question

I have a background in calculus but don’t really know anything about physics. Forgive me if this is a really basic question.

The equation for distance of an accelerating object with constant acceleration is:

\[d=ut +\frac{1}{2}at^2\]

which can also be expressed

\[d=\frac{\mathrm{d}x}{\mathrm{d}t}t+\frac{\mathrm{d^2}x}{\mathrm{d}t^2}\frac{t^2}{2}\]

(where x(t) is the position of the object at time t)

That’s fine for a cannonball or something like that, but what about a car accelerating from 0 to cruising speed? The acceleration is obviously not constant, but what about the change in acceleration? Is it constant? I suspect not. And then what about the change in the change of acceleration, etc. etc.? In other words, how does one know how many additional terms to add in the series?

\[d=\frac{\mathrm{d}x}{\mathrm{d}t}t+\frac{\mathrm{d^2}x}{\mathrm{d}t^2}\frac{t^2}{2!}+\frac{\mathrm{d^3}x}{\mathrm{d}t^3}\frac{t^3}{3!}+\frac{\mathrm{d^4}x}{\mathrm{d}t^4}\frac{t^4}{4!}+\cdots ?\]

Answer accepted (score 7)

Technically, the equation

\[d = \frac{\mathrm{d}x}{\mathrm{d}t}t + \frac{\mathrm{d}^2x}{\mathrm{d}t^2}\frac{t^2}{2}\]

is not right. Instead, for constant acceleration, you need

\[d = \left(\left.\frac{\mathrm{d}x}{\mathrm{d}t}\right|_0\right) t + \left(\left.\frac{\mathrm{d}^2x}{\mathrm{d}t^2}\right|_0\right) \frac{t^2}{2}\]

In other words, a quantity like \(\mathrm{d}x/\mathrm{d}t\) changes in time, but you want to use the initial velocity only. I think this is what you probably intended to begin with, though.

If you wanted to solve the problem purely kinematically, then you could try to expand the position in a Taylor series as you wrote in your answer. However, this only works if the function is equal to its Taylor series. For simple functions like exponentials and trig functions this is true, but for a person driving a car it is not. If a function equals its Taylor series everywhere, then if you observe its position over any finite interval of time, no matter how short, you can completely determine what the car will do in the future. This is not realistic.

Instead, you will want some way of determining either the velocity or the acceleration as a function of time or position. In physics, it is common to be able to determine the acceleration as a function of position. The reason is that acceleration comes from the equation \[F=ma\] so that if you can determine the forces present, you know the acceleration, and higher-order derivatives are not necessary.

If you know the velocity as a function of time, you can simply integrate it to find the displacement. \[d(t) = \int_{t_0}^t v(t') \mathrm{d}t'\]

If you know the acceleration as a function of time, you can integrate that too, although this situation is less common.

\[d(t) = v_0(t - t_0) + t\int_{t_0}^t a(t')\mathrm{d}t' - \int_{t_0}^t t'a(t')\mathrm{d}t'\]

I found this expression by looking for something whose derivative with respect to time was the velocity

\[v(t) = v_0 + \int_{t_0}^t a(t')\mathrm{d}t'\]

If you know the velocity as a function of position, you have the differential equation

\[\frac{\mathrm{d}x}{\mathrm{d}t} = v(x)\]

which you can solve by separation of variables.

If you know the acceleration as a function of position, you have the differential equation

\[\frac{\mathrm{d}^2x}{\mathrm{d}t^2} = a(x)\]

which is not always easy to solve. In more realistic scenarios, the acceleration will depend not only on the object’s own position, but also on the positions of the things it’s interacting with. This gives coupled differential equations, which can be simplified in special cases, but frequently can only be solved numerically.
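The displacement formula for a known \(a(t)\) can be sanity-checked numerically. The acceleration profile \(a(t) = 3 - 0.5t\) below is an assumed example (a car easing off the pedal), and the closed form is worked out by hand for comparison:

```python
# Check  d(t) = v0*(t - t0) + t * ∫ a dt' - ∫ t' a(t') dt'
# against the exact answer for an assumed a(t) = 3 - 0.5*t,
# using a plain trapezoid rule (standard library only).

def trapz(f, a, b, n=100_000):
    """Trapezoid-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

a_of_t = lambda t: 3.0 - 0.5 * t   # assumed acceleration profile
v0, t0, t = 2.0, 0.0, 4.0          # assumed initial velocity and times

# The formula from the answer:
d_formula = (v0 * (t - t0)
             + t * trapz(a_of_t, t0, t)
             - trapz(lambda s: s * a_of_t(s), t0, t))

# By hand: v(t) = 2 + 3t - 0.25 t^2, so d(t) = 2t + 1.5 t^2 - t^3/12.
d_exact = 2 * t + 1.5 * t**2 - t**3 / 12

print(f"numeric: {d_formula:.4f} m, exact: {d_exact:.4f} m")
```

The two agree to numerical precision, confirming that the expression’s time derivative really is \(v(t)\).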

Answer 3 (score 4)

You can keep on adding higher order derivatives until they become vanishingly small. A convenient point of entry to this topic would be the Wikipedia article Jerk (physics).

Bear in mind that when you’re in a car, jerk is only of relevance during the time when the accelerator pedal is actually moving, to a first-order approximation.

Update: It seems a question with a great deal of relevance to yours was posed a few hours ago on math.se - What is an example of an application of a higher order derivative (\(y^{(n)}\), \(n≥4\))?. Arturo’s answer expands on higher derivatives in kinematics (jounce!), whilst Greg’s answer includes a source of jerk in driving I didn’t consider (steering).

95: Why is jumping into water from high altitude fatal? (score 99783 in 2014)

Question

If I jump from an airplane, positioned straight upright, into the ocean, why is it the same as jumping straight onto the ground?

Water is a liquid as opposed to the ground, so I would expect that by plunging straight in the water, I would enter it aerodynamically and then be slowed in the water.

Answer accepted (score 42)

When you would enter the water, you need to “get the water out of the way”. Say you need to get 50 liters of water out of the way. In a very short time you need to move this water by a few centimeters. That means the water needs to be accelerated in this short time first, and accelerating 50 kg of matter with your own body in this very short time will deform your body, no matter whether the matter is solid, liquid, or gas.

The interesting part is, it does not matter how you enter the water—it is not really relevant (regarding being fatal) in which position you enter the water at a high velocity. And you will be slowing your speed in the water, but too quickly for your body to keep up with the forces from different parts of your body being decelerated at different times.

Basically, I’m making a very rough estimate of whether it would kill, taking into account only one factor: that the water needs to be moved away. I conclude that it would still kill, so I do not even try to find all the other ways it would.

Update - revised:

One of the effects left out of the estimate is surface tension.
It seems not to contribute a relevant part of the forces: the contribution exists, but it is negligibly small. That depends on the size of the object entering the water; for a small object, it would be different.

(see answers of How much of the forces when entering water is related to surface tension?)

Answer 2 (score 21)

Let’s look at this another way: you’re just moving from one fluid to another. Sounds harmless, right? By specification of the problem, we’re at terminal velocity when we hit the water. The force of drag (in both mediums) is roughly:

\[ F_D\, =\, \tfrac12\, \rho\, v^2\, C_D\, A = \rho \left( \frac{1}{2} v^2 C_D A \right) \]

You can imagine that everything except for the density term is the same as you initially transition from the air medium to water. This isn’t perfectly accurate, because these are very different Reynolds numbers, but it’s good enough for here.

That means that the force (and correspondingly, acceleration) will simply change by the same factor that the density changes by. Also, we know the original acceleration due to drag was 1g, in order to perfectly counteract gravity, which is the definition of terminal velocity. That leads to a simple estimation of the acceleration upon hitting the water. I’ll assume we’re at sea level.

\[ \frac{a_2}{a_1} = \frac{ a_2 }{1\,g}= \frac{ \rho_{\text{H}_2\text{O}} } { \rho_{\text{air}} } = \frac{1000}{1.3} \\ a_2 \approx 770\,g \]
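The ratio above amounts to one line of arithmetic, with the sea-level densities used in this answer:

```python
# Density-ratio estimate: at terminal velocity the drag deceleration in
# air is exactly 1 g, so on entering water it scales by rho_water/rho_air
# (the same speed, drag coefficient, and area assumed at the instant of entry).

rho_water = 1000.0   # kg/m^3, water
rho_air = 1.3        # kg/m^3, air at sea level

a_in_g = rho_water / rho_air   # deceleration just after entry, in units of g
print(f"deceleration ~ {a_in_g:.0f} g")
```

Roughly 770 g, orders of magnitude beyond anything survivable for any duration.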

The maximum acceleration a person can tolerate depends on the duration of the acceleration, but there is an upper limit that you will not tolerate (without death) for any amount of time. You can see from literature on this subject, NASA’s graphs don’t even bother going above 100g.

Note that a graceful diver’s entry will not help you - that’s because an aerodynamic position also increases the velocity at which you hit.

Answer 3 (score 20)

Consider jumping into a swimming pool. Do a barrel-roll (sorry I mean cannon ball, that just kind of slipped out). It’s fun, you enter the water nicely and make a huge splash, probably soaking your sister in the process (that’ll learn her). Now do a belly flop. Not as fun. You displace exactly the same amount of water in the same time, but this time there is a lot more pain and you come away with red skin and maybe some bruising. The difference? You cover more area in a belly-flop than a cannon ball.

At extreme velocities, accelerating your body’s mass of water will kill you anyway. However, what actually kills you is hitting the surface. Dip your hand in water… easy. Now slap the surface…. it’s like hitting the table (almost). Pressures caused by breaking the surface make water act more solid on shorter timescales, which is why they say hitting water at high speeds is like hitting concrete; on those short times, it is actually like concrete!

96: Charging 12V 150Ah battery (score 98455 in 2012)

Question

I want to charge a 12V, 150Ah battery with a solar panel. The solar panel’s specs are 12V, 25 Watts.

Can anyone please show me how to calculate how much time it will take to charge the battery? Please provide the calculations and formulas.

Answer accepted (score 3)

Watts (electrical power) = volts \(\cdot\) amps, so 25 W ≈ 12 V \(\cdot\) 2.1 A.

150 amp-hours is the total capacity, so that's 150 A for 1 hour, 1 A for 150 hours, or 2.1 A for about 72 hours.

That's in an ideal world, of course: there are heating losses as you charge the battery, the voltage of the solar panel varies with the load, and if you entirely empty a 12 V lead-acid battery you are likely to damage it. But basically you are looking at 10 days of full sunshine.
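The arithmetic in this answer, as a sketch of the ideal case (constant 12 V, no losses, panel always at its rated 25 W; the 7 h of full sun per day is an illustrative assumption):

```python
# Idealized charge-time estimate: constant voltage, no charging losses,
# panel always delivering its full rated power.
PANEL_POWER_W = 25.0
BATTERY_VOLTAGE_V = 12.0
CAPACITY_AH = 150.0

charge_current_a = PANEL_POWER_W / BATTERY_VOLTAGE_V   # ~2.1 A
charge_time_h = CAPACITY_AH / charge_current_a         # ~72 h
days_at_7h_sun = charge_time_h / 7                     # ~10 days of full sun

print(f"{charge_current_a:.1f} A -> {charge_time_h:.0f} h "
      f"(~{days_at_7h_sun:.0f} days at 7 h of full sun per day)")
```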

Answer 2 (score 2)

The answers you’ve got so far from Vladimir and Martin give you a good first-order approximation: power = current x voltage. Energy = power x time. So your 12V battery of 150Ah needs 1800Wh of energy (12 x 150). So a 25W PV panel would need 72 hours at full output (1800Wh/25W).

The equation is:

\[\text{Peak-hours required} = \frac{\text{battery capacity (Wh)}}{\text{panel rated power (W)}} = \frac{1800}{25} = 72\]

That’s an unusual combination of quite a small panel and a very big battery - is the battery designed to be a multi-week store for a low-power system, by any chance?

For a more accurate answer, you need a lot more information.

The output of the panel at any moment will depend on:

  1. the voltage, (which will be determined by the battery, and will change as the battery charges)

  2. the ambient temperature, and

  3. the amount of light hitting the panel. The amount of light hitting the panel will depend on panel tilt, orientation, overshading, weather, altitude, location, time of year, and time of day.

For (1), you need the panel IV curve, typically presented on the manufacturer’s datasheet. That datasheet will also tell you how output varies with panel temperature. You can then estimate the panel’s temperature from the ambient temperature - it will (to the first order) be a fairly steady amount above ambient.

Here are some example power curves for a 12 V PV panel (it looks like approx. 125 W peak), from altestore.com:

PV panel I-V curve

You also need to know about the battery’s characteristics as it charges: the heat losses, and how its voltage changes with temperature and % charged. In a more complex system that involves a controller too, you’d need to know how the controller behaves, how it tracks the maximum power point, and what its internal losses are. A controller is very desirable, not only because it will track the PV panel’s maximum power point, but also because it will prevent the battery over-charging.

There are various online PV calculators that will combine some of this information to give you estimates of output, such as the US NREL PVWATTS calculator. And there are free online GIS insolation resources to give you the raw source information to do detailed calculations yourself, such as the EU PVGIS database.

Answer 3 (score -1)

150 ampere-hours is the battery charge capacity. With a power of 25 W = 25 A·V at V = 12 V, you will supply 25/12 coulombs each second (1 A = 1 C/s). So the charge time is equal to 150/(25/12) = 72 hours.

97: What causes an electric shock - Current or Voltage? (score 97349 in 2013)

Question

Though voltage and current are two interdependent physical quantities, I would like to know which gives more of a “shock” to a person - voltage or current? In simple words, is the “electric shock” worse when the voltage is higher or when the current is higher?

Answer accepted (score 11)

You’d have to define ‘shock’, but what kills you is enough current during enough time, not voltage. Of course you need enough voltage to keep the current going over your body’s resistance, but it definitely plays a secondary role.

In a former professional life I worked developing Residual Current Circuit Breakers, and 30 mA is the usual rating for devices aiming at protecting lives. In wet environments, such as bathrooms or swimming pools, sometimes 10 mA is recommended.

According to Wikipedia's RCCB article, 25-40 ms of 30 mA is enough to send your heart into fibrillation, which probably qualifies as a pretty strong shock. That would require your heart being in the path of the current, though. This link has more information on what to expect depending, again, on the current, not the voltage.

Answer 2 (score 8)

If we model the path for the current through the human body as a resistor, then by Ohm’s law, the current and voltage are proportional.

That is, a greater current through the body will be associated with a greater voltage across the body.

Having said that, let's consider the source of the shock. Some sources may produce a large voltage but are not capable of sourcing significant current. These sources are said to have a high internal resistance.

The point is that a high-voltage source with high internal resistance may not give you much of a shock at all while a lower voltage source with low internal resistance may kill you.
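This point can be sketched with a simple series-resistor model; the body resistance and the two source examples below are illustrative assumptions, not values from the answer:

```python
# Simple series model: I = V / (R_internal + R_body).
# R_BODY is a rough dry-skin hand-to-hand value; the sources are illustrative.
R_BODY = 10_000.0  # ohms

def shock_current_ma(v_source, r_internal):
    """Current through the body, in mA, for a source with internal resistance."""
    return 1000.0 * v_source / (r_internal + R_BODY)

# High-voltage but very high internal resistance (static-electricity-like):
print(f"{shock_current_ma(10_000, 100_000_000):.2f} mA")  # ~0.10 mA: barely felt
# Much lower voltage but a stiff (low internal resistance) source:
print(f"{shock_current_ma(230, 50):.1f} mA")              # ~22.9 mA: dangerous
```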

So, there really isn’t a simple answer to your question other than “it depends”.

Answer 3 (score 4)

You see, current in one direction indicates the flow of free electrons in the other direction. Hence, current flows. But voltage does not flow: it's actually the work done per unit charge (joules/coulomb). In other words, it's an energy per charge, or simply the potential difference between two points. That's why a bird resting on a single carrier wire does not get a shock: both its feet are at the same potential. But by touching the live wire and ground, we create a potential difference (and a path sufficient for current to flow) between the live wire and earth. Hence, we get a shock…

An electric shock is a sufficient amount of current flowing through the human body for the person to feel it. But this doesn't mean that shock is not caused by voltage. Of course, it depends upon voltage, as related by Ohm's law\[I=\frac{V}{R}\]

The shock your body feels depends upon the applied voltage and the resistance of your body. If your body has a resistance of about 10,000 \(\Omega\) and the voltage is 230 V, then \[I=\frac{230\ \text{V}}{10000\ \Omega}=23\ \text{mA}\]

You would get a shock, and a serious one… (at this current, you can't let go).

Hence, you cannot differentiate a shock based on voltage or current alone. It depends on both the voltage and the resistance of your body (which together determine the current).

Here's a page which shows current paths, and another one which provides comparison info.

98: I don’t understand what we really mean by voltage drop (score 96751 in 2019)

Question

This post is my best effort to seek assistance on a topic which is quite vague to me, so that I am struggling to formulate my questions. I hope that someone will be able to figure out what it is I’m trying to articulate.

If we have a circuit with a resistor, we speak of the voltage drop across the resistor.

I understand all of the calculations involved in voltage drop (ohm’s law, parallel and series, etc.). But what I seek is to understand on a conceptual level what voltage drop is. Specifically: what is the nature of the change that has taken place between a point just before the resistor and a point just after the resistor, as the electrons travel from a negatively to a positively charged terminal.

Now as I understand it, “voltage” is the force caused by the imbalance of charge which causes pressure for electrons to travel from a negatively charged terminal to a positively charged terminal, and “resistance” is a force caused by a material which, due to its atomic makeup, causes electrons to collide with its atoms, thus opposing that flow of electrons, or “current”. So I think I somewhat understand voltage and resistance on a conceptual level.

But what is “voltage drop”? Here’s what I have so far:

  • Voltage drop has nothing to do with the number of electrons, meaning that the number of electrons just before entering the resistor equals the number of electrons just after

  • Voltage drop also has nothing to do with the speed of the electrons: that speed is constant throughout the circuit

  • Voltage drop has to do with the release of energy caused by the resistor.

Maybe someone can help me understand what voltage drop is by explaining what measurable difference there is between points before the resistor and points after the resistor.

Here’s something that may be contributing to my confusion regarding voltage drop: if voltage is the difference in electrons between the positive terminal and the negative terminal, then shouldn’t the voltage be constant at every single point between the positive terminal and the negative terminal? Obviously this is not true, but I’d like to get clarification as to why.

Perhaps I can clarify what I’m trying to get at with the famous waterwheel analogy: we have a pond below, a reservoir above, a pump pumping water up from the pond to the reservoir, and on the way down from the reservoir, the water passes through a waterwheel, the waterwheel being analogous to the resistor. So if I were to stick my hand in the water on its way down from the reservoir, would I feel anything different, depending on whether I stuck my hand above or below the waterwheel? I hope that this question clarifies what it is I’m trying to understand about voltage drop.

EDIT: I have read and thought about the issue more, so I’m adding what I’ve since learned:

It seems that the energy which is caused by the voltage difference between the positive and negative terminals is used up as the electrons travel through the resistor, so apparently, it is this expenditure of energy which is referred to as the voltage drop.

So it would help if someone could clarify in what tangible, empirical way could we see or measure that there has been an expenditure of energy by comparing a point on the circuit before the resistor and a point on the circuit after the resistor.

EDIT # 2: I think at this point what’s throwing me the most is the very term “voltage drop”.

I’m going to repeat the part of my question which seems to be still bothering me the most:

“Here’s something that may be contributing to my confusion regarding voltage drop: if voltage is the difference in electrons between the positive terminal and the negative terminal, then shouldn’t the voltage be constant at every single point between the positive terminal and the negative terminal? Obviously this is not true, but I’d like to get clarification as to why.”

In other words, whatever takes place across the resistor, how can we call this a “voltage drop” when the voltage is a function of the difference in number of electrons between the positive terminal and negative terminal?

Now I’ve been understanding the word drop all along as “reduction”, and so I’ve been interpreting “voltage drop” as “reduction in voltage”. Is this what the phrase means?

Since I’ve read that voltage in all cases is a measurement between two points, then a reduction in voltage would necessarily require four different points: two points to delineate the voltage prior to the drop and two points to delineate the voltage after the drop, so which 4 points are we referring to?

Perhaps a more accurate term would have been “drop in the potential energy caused by the voltage” as opposed to a drop in the voltage?

EDIT # 3: I think that I’ve identified another point which has been a major (perhaps the major) contribution to the confusion I’ve been having all along, and that is what I regard as a bit of a contradiction between two essential definitions of voltage.

When we speak of a 1.5V battery, even before it is hooked up to any wiring / switches / load / resistors / whatever, we are speaking of voltage as a function of nothing other than the difference in electric charge between the positive and negative terminals, i.e the difference in excess electrons between the two terminals.

Since there is a difference in number of electrons only in reference to the terminals, I therefore have been finding it confusing to discuss voltage between any other two points along the circuit – how could this be a meaningful issue, since the only points on the circuit where there is a difference in the number of electrons is at the terminals – so how can we discuss voltage at any other points?

But there is another definition of voltage, which does make perfect sense in the context of any two points along a circuit. Here we are speaking of voltage in the context of Ohm's law: current \(\times\) resistance. Of course, in this sense, voltage makes sense at any two points, and since resistance can vary at various points along the circuit, so clearly voltage can vary at different points along the circuit.

But, unlike the first sense of voltage, where the voltage is a result of the difference in electrons between the terminals, when we speak of voltage between two points along the circuit, say, between a point just before a resistor and a point just after the resistor, we are not saying that there any difference in number of electrons between these two points.

I believe that it is this precise point which has been the main source of my confusion all along, and that’s what I’ve been trying to get at all along. And this is what I’ve been struggling to ask all along: okay, in a battery, you can tell me that there is a voltage difference between the two terminals, meaning that you can show me, tangibly and empirically, that the atoms at the positive terminal have a deficit of electrons, and the atoms at the negative terminal have a surplus of electrons, and this is what we mean by the voltage between the two, then I can understand that.

But in contrast, I accept that there is voltage (\(IR\)) between a point just before a resistor and just after a resistor – but can you take those two points, the one before the resistor and the one after the resistor, and show me any measurable qualitative difference between the two? Certainly there is no difference between the number of electrons in the atoms of those two points. In point of fact, I believe that there is no measurable difference between the two points.

Ah, now you’ll tell me that you can show me the difference between the two points: you’ll hook up a voltmeter to the two points, and that shows the voltage between them!

Sure, the voltmeter is telling us that something has happened between the two points. But the voltmeter does not tell us anything inherent in the points themselves – unlike the two terminals of a battery, where there is an inherent difference between the two points: one has more excess electrons than the other – that is a very inherent, concrete difference.

I guess what we can say is that the electrons travelling at a point just before the resistor are travelling with more energy than the electrons travelling at a point just after the resistor. But is there any way of observing the difference in energy other than a device that simply tells us that the amount of energy has dropped between the two points?

Let me try another way: we could also hook up a voltmeter to the two battery terminals, and the reading would indicate that there is voltage between the two terminals. And if I would ask you yes, but what is it about those two points that is causing that voltage, you could then say, sure: look at the difference in electrons between the two points – that is the cause for the reading of the voltmeter.

In contrast, when we hook up the voltmeter to the points just before and after the resistor, the reading indicates a voltage between those two points. But in this case, if I were to ask you the same question: yes, but what is it about those two points that is causing the voltage, I'm not sure you'd have an answer.

I think this crucially fundamental difference between the two senses of voltage is generally lost in such discussions.

Answer accepted (score 29)

Perhaps I can clarify what I’m trying to get at with the famous waterwheel analogy

99 years ago, Nehemiah Hawkins published what I think is a marginally better analogy:

Fig. 38. — Hydrostatic analogy of fall of potential in an electrical circuit.


Explanation of above diagram

  • In this diagram, a pump at bottom centre is pumping water from right to left.
  • The water circulates back to the start through the upper horizontal pipe marked a-b
  • The height of water in the vertical columns C,m’,n’,o’,D indicates pressure at points a,m,n,o,b
  • The pressure drops from a to b due to the resistance of the narrow return path
  • The pressure difference between a and b is proportional to the height difference between C and D

Analogy

  • Pump = Battery
  • Water = Electric charge carriers
  • Pressure = Voltage
  • Vertical Pipes = Voltmeters
  • pipe a-b = Resistor (or series of four resistors)

Note

  • A “particle” of water at a has a higher potential energy than it has when it reaches b.

There is a pressure drop across a “resistive” tube.

Voltage (electric potential) is roughly analogous to water pressure (hydrostatic potential).

If you could open a small hole at points a,m,n,o,b in the tube and hold your finger against the hole, you would be able to feel that the pressure at those points is different.

The potential at some point is the amount of potential energy of a “particle” at that point.


it would help if someone could clarify in what tangible, empirical way could we see or measure that there has been an expenditure of energy by comparing a point on the circuit before the resistor and a point on the circuit after the resistor.
  1. Purchase a 330 ohm 1/4 watt resistor and a 9V PP3 battery
  2. Place the resistor across the battery terminals
  3. Place your finger on the resistor.
  4. Wait.

Answer 2 (score 10)

“voltage” is the force caused by the imbalance of charge which causes pressure for electrons to travel from a negatively charged terminal to a positively charged terminal,

Nope, voltage is not a force. Voltage is a difference in potential energy per unit charge. More precisely: electric potential is the potential energy per unit charge (just like \(gh\) is the gravitational potential energy per unit mass), and a voltage (a.k.a. voltage difference a.k.a. voltage drop) is a difference in electrical potential between two points.

The actual value of electric potential at any point has no physical meaning; only its difference relative to the electrical potential at some other point, i.e. the voltage, is meaningful or measurable. This means the whole idea of voltage is inherently bound to a choice of two points. There’s no measurement you can make at a single point only that will tell you anything about voltage or electric potential. However, if you have two points, you can determine the voltage between them by pushing a unit charge from one point to the other and measuring how much work it takes (or gives). This is how we can establish voltages in a circuit with resistive elements: move a charge through the circuit from one point to another and see how much energy needs to be put in to get it there.
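The work-per-unit-charge definition can be stated as a one-line sketch (the function name here is just for illustration):

```python
# Voltage as work per unit charge: moving q coulombs across a potential
# difference of V volts takes W = q * V joules.
def work_joules(charge_c, voltage_v):
    return charge_c * voltage_v

print(work_joules(1.0, 9.0))  # 9.0 J to push 1 C across a 9 V drop
print(work_joules(0.5, 9.0))  # 4.5 J for half the charge
```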

The reason it takes energy is fundamentally complicated, having to do with quantum mechanical effects, but as a rough classical model, you could say that the electrons lose energy from colliding with the atoms and molecules of the resistive material, and you need to put in enough energy to make up for those losses.


99: Why does ice melting not change the water level in a container? (score 96586 in 2014)

Question

I have read the explanation for this in several textbooks, but I am struggling to understand it via Archimedes’ principle. If someone can clarify with a diagram or something so I can understand or a clear equation explanation that would be great.

Answer accepted (score 46)

Good question.

Assume we have one cube of ice in a glass of water. The ice displaces some of that water, raising the height of the water by an amount we will call \(h\).

Archimedes’ principle states that the weight of water displaced will equal the upward buoyancy force provided by that water. In this case,

\[\text{Weight of water displaced} = m_\text{water displaced}g = \rho Vg = \rho Ahg\]

where \(V\) is volume of water displaced, \(\rho\) is density of water, \(A\) is the surface area of the glass and \(g\) is acceleration due to gravity.

Therefore the upward buoyancy force acting on the ice is \(\rho Ahg\).

Now the downward weight of ice is \(m_\text{ice}g\).

Now because the floating ice is in equilibrium (neither sinking nor rising), these must balance. That is:

\[\rho Ahg = m_\text{ice}g\]

Therefore,

\[h = \frac{m_\text{ice}}{\rho A}\]

Now when the ice melts, this height difference due to buoyancy goes to 0. But now an additional mass \(m_\text{ice}\) of water has been added to the cup in the form of water. Since mass is conserved, the mass of ice that has melted has been turned into an equivalent mass of water.

The volume of such water added to the cup is thus:

\[V = \frac{m_\text{ice}}{\rho}\]

and therefore,

\[Ah = \frac{m_\text{ice}}{\rho}\]

So,

\[h = \frac{m_\text{ice}}{\rho A}\]

That is, the height the water has increased due to the melted ice is exactly the same as the height increase due to buoyancy before the ice had melted.
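The algebra can be checked numerically; the ice mass and glass cross-section below are made-up illustrative values:

```python
# Check: height rise from buoyant displacement equals height rise from meltwater.
RHO_WATER = 1000.0  # kg/m^3
M_ICE = 0.05        # kg, illustrative ice cube
A_GLASS = 0.005     # m^2, illustrative glass cross-section

# While floating: rho * A * h * g = m_ice * g  =>  h = m_ice / (rho * A)
h_floating = M_ICE / (RHO_WATER * A_GLASS)

# After melting: added volume m_ice / rho, spread over area A
h_melted = (M_ICE / RHO_WATER) / A_GLASS

print(h_floating, h_melted)  # the same height either way
```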

Edit: For completion, since it was raised as a question in the comments:

Melting icebergs boost sea level rise, because the water they contain is not salty.

Although most of the contributions to sea-level rise come from water and ice moving from land into the ocean, it turns out that the melting of floating ice causes a small amount of sea-level rise, too.

Fresh water, of which icebergs are made, is less dense than salty sea water. So while the mass of sea water displaced by the iceberg is equal to the iceberg's mass, the melted fresh water will take up a slightly larger volume than the displaced salt water. This results in a small increase in the water level.

Globally, it doesn’t sound like much – just 0.049 millimetres per year – but if all the sea ice currently bobbing on the oceans were to melt, it could raise sea level by 4 to 6 centimeters.
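A quick sketch of the freshwater effect per kilogram of floating ice; the densities are typical textbook values, an assumption rather than figures from the answer:

```python
# Why melting floating ice raises the level slightly in *salt* water.
RHO_FRESH = 1000.0  # kg/m^3, melted ice
RHO_SALT = 1025.0   # kg/m^3, typical seawater

m = 1.0                       # kg of floating ice
v_displaced = m / RHO_SALT    # seawater volume displaced while floating
v_meltwater = m / RHO_FRESH   # freshwater volume produced on melting

excess_percent = 100 * (v_meltwater - v_displaced) / v_displaced
print(f"Meltwater exceeds displaced volume by {excess_percent:.1f}%")  # ~2.5%
```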


Answer 3 (score 2)

Brandon, above, gets right to the point. Frozen water displaces its own mass in the rest of the water, which means in effect it displaces an amount equal to itself. While frozen it is larger in volume, and thus less dense, because of hydrogen bonding – that's why it floats – and when it melts it returns to the liquid state (surprise!) at essentially the same density as the surrounding water. A given quantity of water, temporarily larger in volume but correspondingly less dense while frozen, will thus not raise the overall level of water when it returns to the liquid state (assuming no evaporation, no mosquitoes stopping in for a sip, etc.).

100: How is light affected by gravity? (score 96345 in 2012)

Question

Light is clearly affected by gravity, just think about a black hole, but light supposedly has no mass and gravity only affects objects with mass.

On the other hand, if light does have mass, then doesn't mass become infinitely larger the closer to the speed of light an object travels? This would result in light having an infinite mass, which is impossible.

Any explanations?

Answer accepted (score 34)

In general relativity, gravity affects anything with energy. While light doesn’t have rest-mass, it still has energy — and is thus affected by gravity.

If you think of gravity as a distortion in space-time (a la general relativity), it doesn’t matter what the secondary object is. As long as it exists, gravity affects it.

Answer 2 (score 15)

When you think about how gravity affects light you really need to think in terms of general relativity, which describes gravity as the effect of a curved space-time on moving particles. Summarized by John Wheeler, mass tells space how to curve and space-time tells mass how to move.

When we apply this to light, we start with the fact that light travels in straight lines (null geodesics). However, when we have a large mass (say the Sun) it curves the space around it, so our light ray will follow a straight line in this curved space-time. This path will appear bent to us, and leads to the phenomenon of gravitational lensing.
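To put a number on the bending: for a ray grazing the Sun, general relativity gives a deflection angle \(\theta = 4GM/(c^2 b)\). A quick sketch with standard constants (these values are not from the answer):

```python
import math

# Deflection of starlight grazing the Sun: theta = 4 G M / (c^2 b)
G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg
R_SUN = 6.957e8    # m, impact parameter b = solar radius (grazing ray)

theta_rad = 4 * G * M_SUN / (C**2 * R_SUN)
theta_arcsec = math.degrees(theta_rad) * 3600

print(f"{theta_arcsec:.2f} arcseconds")  # ~1.75, as in the 1919 eclipse test
```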

As an aside, this also easily answers the question of why all masses fall at the same rate (or gravitational mass equals inertial mass). Massive particles also follow geodesics (straight lines) in this curved space-time, so if I throw two objects of different masses with the same initial velocity they will follow the same geodesic through space-time, and we see this as both objects having the same acceleration.

Answer 3 (score 8)

The source of gravity in general relativity is an object called the stress-energy tensor, which includes energy density, momentum density, energy flux, momentum flux (which includes shear stress and pressure), etc. Obviously, light has energy, so it acts gravitationally in GR. Since \(E = mc^2\), rest mass contributes an enormous amount of energy, so massive objects have very strong gravitational fields and the other terms are negligible for them, which is why Newton's law works so well. However, the other terms are there - so light does have a gravitational field, even though it has zero rest mass.