Wednesday, November 17, 2010

Pain Prevalence Increases at the End of Life

The prevalence of pain increases rapidly in the last few months of life, regardless of the cause of death, a study in the Annals of Internal Medicine reports.


Researchers used data from the national Health and Retirement Study to examine levels of pain in some 4700 people over age 50 who died within 24 months of being interviewed about their pain levels. The participants were placed within 24 groups, according to the time span between their interviews and death.


Clinically significant pain — defined as pain of moderate or severe intensity most of the time — was present in about one quarter of the patients over the first 20 months. However, starting about 4 months before death, the prevalence of clinically significant pain began rising steadily, reaching a prevalence of about 50% in the final month of life. (The rate reached 60% among those with arthritis.)


An editorialist advises clinicians to routinely ask about clinically significant pain and be prepared to treat it.

Saturday, August 28, 2010

MP3 Players Associated with Short-Term Hearing Loss

Listening to an MP3 player for just an hour can lead to temporary hearing loss, according to a small study in the Archives of Otolaryngology—Head & Neck Surgery.


Researchers in Belgium had 21 young adults with normal hearing listen to pop rock on an MP3 player at comfortable volumes for an hour, on six different occasions at least 2 days apart. The researchers found that, after listening, subjects experienced significant deterioration in hearing at high and low frequencies. Analyses revealed that hearing loss was temporary — participants recovered their normal hearing in between listening sessions.


The authors call for more research but say their findings "indicate the potential harmful effects" of listening to MP3 players.

Thursday, August 19, 2010

experimental data

Intention to Treat Analysis
P-value
Confidence Interval
Absolute Risk Reduction or Relative Risk Reduction
Number Needed to Treat and Number Needed to Harm
Odds Ratio


When looking at the results it is important to understand whether an intention to treat analysis was applied. An intention to treat analysis is a statistical analysis of all the patients who entered the trial, analyzed in the groups to which they were randomized, regardless of whether they completed the trial. For instance, consideration should be given when there are 150 patients in the treatment group and 150 patients in the placebo group, but 50 patients in the treatment group drop out halfway through the trial because of side effects. In this situation, it would be biased to look at just the 100 patients left in the treatment group at the end of the trial and then report an artificially favourable benefit/risk profile for the treatment. However, when equal numbers of patients drop out in both groups, for known and unknown reasons, the results are likely to be more valid. An intention to treat analysis therefore maintains the integrity of the randomization process, reveals the similarities/differences between groups, and attempts to provide a likely scenario of what would happen if the intervention were implemented in clinical practice.86

A p-value of ≤ 0.05 (i.e., 1 in 20) means that if there were truly no difference between treatments and the trial were repeated 20 times, a result this extreme would be expected by chance in only about 1 of those trials. A p-value of ≤ 0.05 is often considered acceptable and gives credence to the strength of the results, but it says nothing about the magnitude of the treatment effect.

To understand the true treatment effect the confidence interval (CI) should be reported. The CI expresses, with a stated level of confidence, the range within which the true treatment effect is likely to lie. For instance, if 50% of the patients were likely to benefit from the treatment and a 95% CI (40% to 60%) was reported, the 50% is a point estimate: the true effect could reasonably be as low as 40% or as high as 60%, and intervals built this way will capture the true effect 95% of the time. It is generally satisfactory to have a 95% CI with a tight range. However, any time the interval includes zero (no difference between groups), you cannot assume the treatment effect to be real.85

The treatment effect can be reported as an absolute risk reduction (ARR) or relative risk reduction (RRR). For example, in a scenario where stroke occurred in 2% of patients (2 out of 100) given active treatment to lower blood pressure versus 4% of patients (4 out of 100) given placebo over a 5-year period, a relative risk reduction of 50% ((4% − 2%) ÷ 4%) or an absolute risk reduction of 2% (4% − 2%) could be reported. The magnitude of the RRR obviously looks much more impressive than the ARR. However, what you want to know in this case is the number needed to treat (NNT), which is 1 divided by the ARR. Here you would get an NNT of 50 (1/0.02), which means you would have to give the blood pressure treatment to 50 patients for 5 years to prevent 1 patient from having a stroke. The number needed to harm (NNH) is calculated in the same way: it tells you how many patients you would have to treat before you saw 1 patient harmed in a specific way. You can then compare the NNT to the NNH to weigh the benefits against the risks, but the key is to have confidence in the results, i.e., a 95% CI or higher.86
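
The arithmetic above is simple enough to sketch in a few lines of Python (the trial numbers are the hypothetical ones from the example, not from any real study):

```python
def risk_reductions(control_risk, treatment_risk):
    """Return (ARR, RRR, NNT) for two event rates given as fractions."""
    arr = control_risk - treatment_risk   # absolute risk reduction
    rrr = arr / control_risk              # relative risk reduction
    nnt = 1 / arr                         # number needed to treat
    return arr, rrr, nnt

# 4% stroke risk on placebo vs. 2% on treatment over 5 years
arr, rrr, nnt = risk_reductions(0.04, 0.02)
print(round(arr, 2), round(rrr, 2), round(nnt))  # 0.02 0.5 50
```

The same function gives the NNH if you feed it harm rates instead of benefit rates.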

It should be noted that in some instances the results are reported as an odds ratio (OR), which is comparable but not identical to the relative risk. The OR is typically used in case-control studies to assess the risk of an adverse event or in cohort studies to assess disease outcome. The OR is calculated by dividing the odds of the event in the intervention/treatment group by the odds of the event in the control group. For instance, if an adverse effect of a drug is reported with an OR of 1.0, this means the odds of the event in the treatment group were equal to those in the placebo group. An OR greater than 1 means more individuals experienced the event in the treatment group than in the placebo group, while an OR less than 1 means fewer individuals in the treatment group experienced the adverse effect than in the placebo group.86
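
To make the odds arithmetic concrete, here is a minimal Python sketch of an OR from a 2×2 table (the counts are made up for illustration):

```python
def odds_ratio(events_tx, total_tx, events_ctrl, total_ctrl):
    """OR = (odds of event in treatment arm) / (odds in control arm)."""
    odds_tx = events_tx / (total_tx - events_tx)
    odds_ctrl = events_ctrl / (total_ctrl - events_ctrl)
    return odds_tx / odds_ctrl

# 10 adverse events in 100 treated vs. 10 in 100 on placebo
print(round(odds_ratio(10, 100, 10, 100), 2))  # 1.0
# 20 events in 100 treated vs. 10 in 100 on placebo
print(round(odds_ratio(20, 100, 10, 100), 2))  # 2.25
```

Note that odds, not simple proportions, go into the ratio; that is what separates the OR from the relative risk.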

Taking this one step further, let's say a particular treatment effect on reducing colds was reported as an OR of 0.42 with a 95% CI of 0.25–0.71 and p < 0.001. It is therefore reasonable to say that the intervention group had roughly a 58% (calculated as (1 − 0.42) × 100) reduction in the odds of catching a cold compared to the placebo group. However, regardless of whether the OR or RRR is used, the CI and p-value should be reported in order for the trial outcome to be viewed as reliable. The NNT and NNH are not as easily calculated from an OR because that requires a more involved mathematical formula, or they can be estimated using tables.
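
For the curious, one common shortcut (the Zhang–Yu approximation) converts an OR to a risk ratio when you know the control group's baseline risk. The baseline risk below is an assumed value for illustration, not one reported with the cold example:

```python
def or_to_rr(odds_ratio, baseline_risk):
    """Zhang-Yu approximation: risk ratio from an odds ratio and control risk."""
    return odds_ratio / (1 - baseline_risk + baseline_risk * odds_ratio)

# OR of 0.42 from the cold example, with an assumed 30% baseline cold risk
rr = or_to_rr(0.42, 0.30)
print(round(rr, 2))           # 0.51: the RR sits closer to 1 than the OR
print(round((1 - rr) * 100))  # 49: the relative risk reduction in percent
```

The point to take away: the rarer the event, the closer the OR and RR agree; for common events the OR exaggerates the effect.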

Wednesday, June 2, 2010

cervical cancer and what do the experts know

The medically accepted paradigm, officially endorsed by the American Cancer Society and other organizations, is that a patient must have been infected with HPV to develop cervical cancer; it is hence viewed as a sexually transmitted disease, although most women infected with high-risk HPV will not develop cervical cancer.[16] AND the widespread introduction of the Papanicolaou test, or Pap smear, for cervical cancer screening has been credited with dramatically reducing the incidence and mortality of cervical cancer in developed countries.[8] BUT... Some 1100 clinicians (internists, family practitioners, or obstetrician-gynecologists) were given a series of clinical vignettes that described women by age, sexual experience, and Pap testing history, and provided their screening recommendations for each scenario. While over 80% said that at least one set of screening guidelines (e.g., U.S. Preventive Services Task Force) was "very influential" in their practices, only 22% recommended guideline-consistent care for every vignette. Obstetrician-gynecologists were less guideline-concordant than the other specialties. Of note, one third of participants recommended annual Pap testing for an 18-year-old who hadn't had sexual intercourse, while almost half continued to recommend Pap testing for a woman whose cervix had been removed for benign reasons.
The most important risk factor in the development of cervical cancer is infection with a high-risk strain of human papillomavirus. The virus triggers alterations in the cells of the cervix, which can lead to cervical intraepithelial neoplasia, which in turn can lead to cancer. Women who have many sexual partners (or who have sex with men who have had many other partners) have a greater risk.[10][11]

Saturday, May 22, 2010

pi theory

A theory can explain, predict and be used for technological applications and still be wrong. This is analogous to thinking of pi as a rational number. 22/7 is a close approximation and may even be useful in many applications, BUT IT'S WRONG. 355/113 is closer still, but it too is wrong. Hence we can extend toward a limit of reality, i.e., get closer and closer but never get there.
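
A quick Python check makes the analogy concrete: each fraction is useful, and each is still wrong, just less wrong than the last:

```python
import math

# Error of each rational approximation of pi: useful, but never exact.
for label, value in [("22/7", 22 / 7), ("355/113", 355 / 113)]:
    print(label, abs(value - math.pi))  # the error shrinks but never hits zero
```

22/7 is off by about a thousandth; 355/113 by about three ten-millionths. Closer and closer, never there.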

Sunday, May 9, 2010

everything i know about thermodynamics

FIRST LAW: Energy is conserved => Energy1 transduced into Energy2, so E1 = E2. SECOND LAW: Energy = work + heat => w1 + q1 = w2 + q2, AND Lord Kelvin's statement: "It is impossible to convert heat completely into work in a cyclic process." That is, it is impossible to extract energy by heat from a high-temperature energy source and then convert all of the energy into work. At least some of the energy must be passed on to heat a low-temperature energy sink. Hence in our example above, w2 < w1 and q2 > q1. THIRD LAW: Transduction is always happening (transduce: to convert energy from one form to another) because absolute zero is not possible.

Sunday, April 11, 2010

CONFIDENCE INTERVALS ELUCIDATED

Posted by mreades under Uncategorized


A paper appeared in last week's JAMA that I just now got around to reading, but had already read about in a number of other publications. The study looked at the effect early-in-life obesity has on death from heart disease decades later. The paper is a real treasure trove of information worthy of a longer, more comprehensive post later on. For now I want to use it as an example of how statistics can be used to humbug the non-statistically inclined.

In brief the JAMA study was done with data pulled from the monster-sized Chicago Heart Association Detection Project in Industry that was begun in 1967. Subjects who were at least 31 years old were evaluated for a number of parameters including BMI, blood pressure, elevated cholesterol, and history of smoking. The researchers re-evaluated these subjects over the next several decades.

Researchers divided the subjects into five groups: low risk, moderate risk, intermediate risk, elevated risk, and highest risk. It’s not important to the point of this post to get into what specifically constituted these varying levels of risk, but in general, the greater the number or the more severe the risk factors, i.e., elevated cholesterol, high blood pressure, etc., the more high-risk the category. The researchers then divided the subjects into three other groups based solely on BMI: normal weight, overweight, and obese. Within each of these three weight-related groups were spread subjects with varying degrees of risk. In other words, the normal weight group contained subjects who were low risk, moderate risk, intermediate risk, elevated risk and highest risk as a function of their cholesterol levels, blood pressures, smoking history, etc. It was the same for all the groups. The obese group was composed of obese subjects who ranged from low-risk to highest risk. The object of the study was to follow these subjects for many years to see if obesity was truly a risk factor for death from heart disease or if obesity led to elevated cholesterol, high blood pressure, and all the rest, which in turn caused the heart disease mortality.

If the subjects in the normal weight, low-risk group had no more heart disease than those in the obese, low-risk group, then it could be inferred that obesity by itself may not cause heart disease. If, on the other hand, the subjects in the obese, low-risk group had a much higher death rate from heart disease than did those subjects in the normal weight, low-risk group, then at least some of the heart disease could be attributed to the excess fat.

(There is really much, much more under the surface of this paper worthy of exploration, but it will have to wait.)

So what did the study show? It depends upon where you get your information.

According to a statistical analysis of the data, the odds for death by heart disease in the obese, low-risk subjects was 1.43 times that of the normal weight, low-risk subjects implying an almost half again greater risk simply for being obese. And that’s how it was reported in the lay press.

WebMD allowed as to how

The researchers found that the risk of dying from heart disease was 43% higher for study participants who were obese but also met these qualifications for low cardiovascular risk than for normal-weight, low-risk participants.

It appears to be a pretty clear indictment against obesity.

But not if the statistics are analyzed correctly.

Before I get into that I want to produce a quote from Judge Samuel Alito that he uttered during his confirmation hearing before the Senate Judiciary Committee last week. Said he

Well, the analogy went to the issue of statistics and the use and misuse of statistics and the fact that statistics can be quite misleading. … And that’s what that was referring to. There’s a whole - I mean, statistics is a branch of mathematics, and there are ways to analyze statistics so that you draw sound conclusions from them and avoid erroneous conclusions from them.

Truer words were never spoken. But you’ve got to analyze the statistics, not take them at face value.

So let’s analyze the statistics used in our study under discussion to see if and how anyone went wrong.

Here is how the 1.43 ratio was written in the paper:

the odds ratio (95% confidence interval) for CHD [coronary heart disease] death for obese participants compared with those of normal weight in the same risk category was 1.43 (0.33-6.25).

What does this really mean?

First, it means that 143 people who were in the obese, low-risk group died from CHD for each 100 people who were in the normal weight, low-risk group, giving the risk ratio of 143/100 or 1.43. It seems reasonable that if that were really the finding, then the risk of dying if you are obese with low-risk (as these researchers define low risk) is 1.43 times greater than if you aren’t obese. Right? Not necessarily, and here’s why.

If you flip a coin 10,000 times the odds are that you will get about 5000 heads and 5000 tails since the odds are 50-50 of the coin landing on either side. But what about if you only flip it 40 times? Are you going to get exactly 20 heads and 20 tails? Probably not. What about if you only flip it six times? Will you get three and three? Maybe, but probably not. In fact I just flipped a coin 10 times and got five heads in a row, two tails, two heads, and one tail, giving me seven heads and three tails. Now if this were a study on coin flipping I could confidently predict based on my data that if I flipped this same coin 10,000 times I would get 7000 heads and 3000 tails. But we all know this isn't really true because the sample size I used (ten) was too small to accurately predict the outcome for 10,000 flips. The fact that I got 7 heads and 3 tails came about strictly by chance, which plays a smaller and smaller role as the sample size gets larger.

Statisticians realize that virtually any outcome can be influenced by chance and have developed equations to quantify just how much chance is involved. One of the terms they have come up with (after some pretty complex mathematical maneuvers) is the confidence interval. Pretty much the gold standard is the 95 percent confidence interval. What this means is that if the study were repeated over and over, with a confidence interval (a range between two numbers) computed each time, about 95 percent of those intervals would capture the true result. Since chance can't be totally eliminated, there is still a 5 percent chance that any given interval misses the truth.
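
A small Python sketch shows why sample size matters here: using the standard normal-approximation formula, the 95 percent confidence interval for a fair coin's heads rate shrinks as the number of flips grows (the sample sizes are just illustrative):

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """95% normal-approximation CI for a proportion from n observations."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

for n in (10, 40, 10_000):
    lo, hi = proportion_ci(0.5, n)
    # the interval is wide for 10 flips and razor-thin for 10,000
    print(n, round(lo, 3), round(hi, 3))
```

With 10 flips the interval spans roughly 0.19 to 0.81, so my 7-out-of-10 heads tells you almost nothing; with 10,000 flips it tightens to about 0.49 to 0.51.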

Let me digress a little here to define what we mean by our result falling into or outside of this range. In our study the data showed that 43 percent more people died in the obese, low-risk group than in the normal weight, low-risk group. But so what? As callous as it seems, we don’t care about those people; they’re already dead. What we care about is how the data provided by all these dead people affects you and me and our loved ones and all the other people who aren’t dead yet. We all want to know if this 43 percent is just a chance finding like my flipping 7 out of 10 heads, in which case it’s meaningless, or do I really have a 43 percent greater chance of dying of heart disease if I’m obese even though I don’t have any other risk factors? Those are the questions the confidence interval addresses.

In this study the 95 percent confidence interval is (0.33-6.25). It’s usually stated like it is in this case 1.43 (0.33-6.25). This means that the risk ratio as applied to the population at large should come in 95 percent of the time between 0.33 and 6.25. So this means that the risk of dying if you are obese with no other risk factors could be anywhere from 1/3 as much to 6.25 times as much.

Say what? 1/3 as much?

Yep. Even though the middle of this range is around 1.43 the actual risk is just as likely to be 0.5, which would mean you have half the chance of dying as someone who is normal weight without risk factors. In other words, you would be better off being fat.

These numbers in the parentheses are critical. If the number in front of the parentheses is above 1, as with the 1.43, then you want to make sure that both numbers within the parentheses are also above 1. If the first number is below 1, the interval includes 1, meaning the true risk could actually run the other way, which negates your analysis.

In this case, since the first number is less than 1, the risk ratio is statistically meaningless and can be ignored. What it means in this specific case is that, in terms of your risk of dying of heart disease later in life, it makes no difference whether or not you're overweight early in life as long as your other risk factors (as identified by the researchers in this study) are normal. Not the 43 percent greater risk that the authors and the press that picked the story up proclaimed. Too bad the authors and all the medical press people weren't a little more statistically honest.
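
The rule of thumb here is mechanical enough to write down as a tiny Python check (using the interval from the JAMA paper, plus a made-up clearly significant interval for contrast):

```python
def ratio_is_significant(ci_low, ci_high):
    """True only if the 95% CI for a ratio lies entirely above or below 1."""
    return ci_low > 1 or ci_high < 1

print(ratio_is_significant(0.33, 6.25))  # False: the 1.43 OR's interval spans 1
print(ratio_is_significant(1.10, 1.90))  # True: the whole interval is above 1
```

For difference measures like the ARR, the same check applies with 0 in place of 1.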

But to tell you the truth, I suspect that the authors of this paper (and I know that the medical writers) don’t have the same understanding of the confidence limits and what they really mean that you do after reading this post. Most researchers run their data through a computerized statistical program and simply look at the risk ratio (the 1.43 in this case) without really having a clue what the numbers inside the parenthesis mean.

I swear that over the next few weeks I’ll post in as simple a way as I can a basic (very basic) primer on statistics. If I don’t do this in a timely fashion you may write and cancel your subscription to this blog, and the unused portion of your subscription fee will be cheerfully returned.

energy use duration

muscle physiology

There are different components that can be trained/manipulated which contribute to maximal size. Of course there's actual myofibrillar hypertrophy (an increase in the size of the contractile fibers). There's also sarcoplasmic hypertrophy (an increase in non-contractile components of the muscle such as glycogen, water, minerals, mitochondria, etc). Capillary density can also be improved (increasing nutrient availability to muscle fibers). There are three major types of muscle fibers: Type I (or slow oxidative), Type IIa (fast oxidative/glycolytic) and Type IIx (fast glycolytic). The old Type IIb fibers turn out only to exist in animal models; IIx describes the highest threshold fibers in humans. Each fiber type has a distinctive physiology in terms of force and growth capability, fatigueability, etc. Type I fibers have the lowest force output and growth potential and take the longest to fatigue and Type II fibers have a higher force output and growth capacity and fatigue more quickly with Type IIa being intermediate between Type I and Type IIx. We might simplistically look at the rep schemes of holistic training as hitting a given pool of motor units: sets of 4-6 for Type IIx, sets of 12-15 for Type IIa and sets of 40 for Type I. This isn't necessarily incorrect although it goes a little beyond that.

Dr. Hatfield may have been one of the first Americans to latch onto the idea that there were different components of a muscle that contributed to muscle growth. This goes along with the European idea of myofibrillar vs. sarcoplasmic hypertrophy (this topic is discussed in greater detail in my UD2 book). Myofibrillar hypertrophy refers to growth of the actual contractile component of the muscle fiber, while sarcoplasmic hypertrophy refers to growth of everything else: glycogen, water, minerals, mitochondria and capillaries. The key thing to note is that each component requires a different type of stress to stimulate growth.

infinite distant in finite time


Consider a machine which can travel 1 mile in 60 minutes, then 1 mile in 30 minutes, then 1 mile in 15 minutes, then 1 mile in 7 1/2 minutes, etc. The sum 1 + 1/2 + 1/4 + 1/8 + ... = 2, so the machine covers every successive mile within a total of just 2 hours: infinite distance in finite time.
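
Summing the series numerically in Python shows the partial sums closing in on 2 hours even as the miles pile up without bound:

```python
# Each mile takes half the time of the last; total time converges to 2 hours.
total_hours = 0.0
term = 1.0
for mile in range(1, 31):  # the first 30 miles
    total_hours += term
    term /= 2

print(round(total_hours, 9))  # 1.999999998: under 2 hours, no matter how far
```

After 30 miles the machine is within two billionths of an hour of the 2-hour limit, and no number of further miles pushes it past.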

thermodynamics

FIRST LAW => Energy = work + heat => energy is conserved. SECOND LAW => this reaction cannot produce 100% work => i.e., the heat/work ratio is > zero, but the lower this number, the more efficient the reaction. THIRD LAW => this equation always flows left to right => ALWAYS. Hence, with respect to work: you cannot win, you cannot even break even, and lastly you must play.

negative gibbs free energy = rxn happens

Predicting Spontaneity: How It's Done

Predicting the spontaneity of a reaction can be a tricky business. On the surface you would think that the change in enthalpy (Delta H rx) would predict whether a reaction is spontaneous simply by applying the First Law of Thermodynamics. If Delta H were negative, the chemical system would lose energy to the surroundings, and that should make the products more stable, since systems tend toward a lower (more stable) energy state. However, the First Law considers only the total energy of the system. It doesn't take into account that an accompanying entropy increase might be big enough to make a reaction that would otherwise be predicted non-spontaneous actually be spontaneous. Indeed, some reactions are spontaneous only at low temperatures whereas others are spontaneous only at high temperatures. Therefore Delta H is not a foolproof way of predicting spontaneity.

Others might consider using the change in entropy of a reaction system as a predictor of spontaneity. According to the Second Law, systems tend toward higher entropy, so it might be concluded that a positive change in entropy indicates spontaneity. The problem is that the Second Law refers to "isolated" systems with no input from the outside environment, and real reaction systems almost always exchange energy with their surroundings, so Delta S alone will not always predict spontaneity. There are spontaneous processes with a negative Delta S, such as the precipitation of a slightly soluble salt from a chemical reaction: consider what happens when you pour a solution of NaCl together with a solution of silver nitrate.

So what will be a good predictor? A good predictor of spontaneity would be one that considers the Delta H, the Delta S, and the temperature since there are some reactions as noted above that seem to respond to changes of temperature even though we find the Delta H and Delta S would be approximately the same at any temperature.

Gibbs-Helmholtz Equation



Professors Gibbs and Helmholtz came up with a relationship that takes all three of these factors into consideration. This is known as the Gibbs-Helmholtz Equation:

Delta G rx = Delta H rx - T(Delta S rx )

Let's identify each term in this very important equation.


The Delta H rx represents the total energy exchange that takes place between the system and its environment.
The T(Delta S rx) term represents the energy used to take care of the intermolecular activity. This is wasted energy and has to be subtracted from the total energy. An analogy can be drawn here between a mechanical engine and a chemical reaction (engine). When a mechanical engine performs useful work, not all of the energy output of the engine goes toward that end. Some of the energy output is wasted due to friction of moving parts; that is why we never have a perpetual-motion engine. The friction of the moving parts is wasted energy and must be subtracted from the total energy output to get the net useful energy capable of performing a task or work.
Delta G represents the net useful energy of a chemical system (engine). Molecules in motion rub up against each other more or less depending on how independent they are of one another (the higher the entropy, the more independent they are). This molecular friction (T Delta S) is siphoned off from the total energy (Delta H), leaving the net energy capable of performing a task (Delta G).

If Delta G is negative, then net useful energy will be released to perform work, and the reaction will occur spontaneously. On the other hand, if Delta G is positive, then net energy will have to be absorbed from the environment, and the reaction cannot occur unless something is done from the outside.
A third possibility is that Delta G equals zero. At that value the system is in a state of equilibrium. If one knows Delta H and Delta S, one can determine the equilibrium temperature for the system by setting Delta G equal to 0 and solving for T (T = Delta H / Delta S). This equilibrium temperature is the break-even point for systems whose Delta G might be positive or negative depending upon the temperature.
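
As a sketch, the Gibbs-Helmholtz bookkeeping is a one-liner; the numbers below are rough textbook values for ice melting (assumed for illustration, not from this post):

```python
def gibbs(delta_h, delta_s, temp_k):
    """Delta G = Delta H - T * Delta S (energy in J/mol, T in kelvin)."""
    return delta_h - temp_k * delta_s

def equilibrium_temp(delta_h, delta_s):
    """Temperature where Delta G = 0, i.e., T = Delta H / Delta S."""
    return delta_h / delta_s

# Roughly ice -> water: dH ~ +6010 J/mol, dS ~ +22.0 J/(mol K)
dH, dS = 6010.0, 22.0
print(gibbs(dH, dS, 263.15) > 0)  # True: below 0 C, melting is non-spontaneous
print(gibbs(dH, dS, 298.15) < 0)  # True: at room temperature, it is spontaneous
print(round(equilibrium_temp(dH, dS)))  # about 273 K, near the melting point
```

Note how the same dH and dS give opposite signs of Delta G at different temperatures, which is exactly why neither enthalpy nor entropy alone could predict spontaneity.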

EXCESS SUGAR STOPS SHBG PRODUCTION

Too Much Sugar Turns Off Gene That Controls Effects Of Sex Steroids
SHBG is believed to be a transporter of steroids in the blood; it is not itself very active. Testosterone is either free (1 to 2%), loosely bound to albumin (~59%; this is the available, i.e., bioavailable, testosterone), with the rest bound to SHBG, which is non-bioavailable and unusable. ScienceDaily (Nov. 21, 2007) — Eating too much fructose and glucose can turn off the gene that regulates the levels of active testosterone and estrogen in the body, shows a new study in mice and human cell cultures that's published this month in the Journal of Clinical Investigation. This discovery reinforces public health advice to eat complex carbohydrates and avoid sugar. Table sugar is made of glucose and fructose, while fructose is also commonly used in sweetened beverages, syrups, and low-fat food products. Estimates suggest North Americans consume 33 kg of refined sugar and an additional 20 kg of high fructose corn syrup per person per year.

Glucose and fructose are metabolized in the liver. When there’s too much sugar in the diet, the liver converts it to lipid. Using a mouse model and human liver cell cultures, the scientists discovered that the increased production of lipid shut down a gene called SHBG (sex hormone binding globulin), reducing the amount of SHBG protein in the blood. SHBG protein plays a key role in controlling the amount of testosterone and estrogen that’s available throughout the body. If there’s less SHBG protein, then more testosterone and estrogen will be released throughout the body, which is associated with an increased risk of acne, infertility, polycystic ovaries, and uterine cancer in overweight women. Abnormal amounts of SHBG also disturb the delicate balance between estrogen and testosterone, which is associated with the development of cardiovascular disease, especially in women.

“We discovered that low levels of SHBG in a person’s blood means the liver’s metabolic state is out of whack – because of inappropriate diet or something that’s inherently wrong with the liver – long before there are any disease symptoms,” says Dr. Geoffrey Hammond, the study’s principal investigator, scientific director of the Child & Family Research Institute in Vancouver, Canada, and professor in the Department of Obstetrics & Gynecology at the University of British Columbia.

“With this new understanding, we can now use SHBG as a biomarker for monitoring liver function well before symptoms arise,” says Dr Hammond, who is a Tier 1 Canada Research Chair in Reproductive Health. “We can also use it for determining the effectiveness of dietary interventions and drugs aimed at improving the liver’s metabolic state.”

Physicians have traditionally measured SHBG in the blood to determine a patient’s amount of free testosterone, which is key information for diagnosing hormonal disorders. In addition, SHBG levels are used to indicate an individual’s risk for developing type 2 diabetes and cardiovascular disease.

The discovery dispels the earlier assumption that too much insulin reduces SHBG, a view which arose from the observation that overweight, pre-diabetic individuals have high levels of insulin and low levels of SHBG. This new study proves that insulin is not to blame and that it’s actually the liver’s metabolism of sugar that counts.

This research was supported by a grant from the Canadian Institutes of Health Research, the Michael Smith Foundation for Health Research, and the BC Children’s Hospital Foundation.

surface areas of a sphere and its circumscribing cylinder are equal


Explanation:
The surface area of a sphere is the same as the lateral surface area of a cylinder with the same radius and a height of 2r. The area of this cylinder is 2πrh = 2πr(2r) = 4πr². To see why the areas are the same, first imagine the sphere sitting snugly inside the cylinder. Then imagine cutting both the sphere and cylinder at any height with two closely spaced planes which are parallel to the bases of the cylinder. Between the planes is a ring-shaped strip of the cylinder's surface and a ring-shaped strip of the sphere's surface. Both strips have the same area! Why? The strip of the sphere has a smaller radius but a larger width than the strip of the cylinder, and these two factors exactly cancel, if the strips are very narrow. The diagram shows two right triangles. The long side of the big triangle is a radius of the sphere and the long side of the small triangle is tangent to the sphere, so the triangles meet at a right angle. You can see that the triangles are similar. This means the ratio of the radii of the two strips (r/R) is the same as the ratio of the widths of the two strips (d/D). So the areas of the strips are equal (2πRd = 2πrD). Since every strip of the sphere has the same area as the corresponding strip of the cylinder, the area of the whole sphere is the same as the lateral area of the whole cylinder. Me: 1) The triangles are similar because the marked angles equal 90 − x in both cases, where x is the angle between R and r, so the angle-angle theorem applies. 2) The sliced sphere strip has area 2πrD and the sliced cylinder strip has area 2πRd, as you can see from the diagram. 3) Since D/d = R/r, we get rD = Rd, and substituting this into 2) proves the strip areas are equal. Q.E.D.
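
The strip argument can even be checked numerically: slice the sphere into thin horizontal bands, apply the radius-times-slant correction, and the total matches the cylinder's lateral area, 4πr². A Python sketch (R and the band count are arbitrary choices):

```python
import math

R = 1.0
n = 100_000
dz = 2 * R / n  # thickness of each horizontal band

sphere_area = 0.0
for i in range(n):
    z = -R + (i + 0.5) * dz           # height of the band's midpoint
    r = math.sqrt(R * R - z * z)      # sphere's radius at that height
    slant = R / r                     # the band is wider by exactly this factor
    sphere_area += 2 * math.pi * r * slant * dz  # = 2*pi*R*dz for every band

cylinder_area = 2 * math.pi * R * (2 * R)  # lateral area of the snug cylinder
print(round(sphere_area, 4))   # 12.5664, i.e., 4*pi*R^2
print(round(cylinder_area, 4)) # 12.5664, the same
```

The cancellation is visible in the loop: the smaller radius r and the larger slant factor R/r multiply out to a constant, so every band contributes 2πR·dz regardless of height.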

acne

The liver can store about 5% of its weight (~50–75 grams) of sugar as glycogen, beyond which it will convert the sugar to fat. When it starts to convert the sugar into fat, it stops making sex hormone binding globulin (SHBG), so SHBG in the blood decreases. SHBG protein plays a key role in controlling the amount of testosterone and estrogen that's available throughout the body. If there's less SHBG protein, then more testosterone and estrogen will be released throughout the body, which is associated with an increased risk of acne.
Observational evidence suggests that dietary glycemic load may be one environmental factor contributing to the variation in acne prevalence worldwide. To investigate the effect of a low glycemic load (LGL) diet on endocrine aspects of acne vulgaris, 12 male acne sufferers (17.0 +/- 0.4 years) completed a parallel, controlled feeding trial involving a 7-day admission to a housing facility. Subjects consumed either an LGL diet (n = 7; 25% energy from protein and 45% from carbohydrates) or a high glycemic load (HGL) diet (n = 5; 15% energy from protein, 55% energy from carbohydrate). Study outcomes included changes in the homeostasis model assessment of insulin resistance (HOMA-IR), sex hormone binding globulin (SHBG), free androgen index (FAI), insulin-like growth factor-I (IGF-I), and its binding proteins (IGFBP-I and IGFBP-3). Changes in HOMA-IR were significantly different between groups at day 7 (-0.57 for LGL vs. 0.14 for HGL, p = 0.03). SHBG levels decreased significantly from baseline in the HGL group (p = 0.03), while IGFBP-I and IGFBP-3 significantly increased (p = 0.03 and 0.03, respectively) in the LGL group. These results suggest that increases in dietary glycemic load may augment the biological activity of sex hormones and IGF-I, suggesting that these diets may aggravate potential factors involved in acne development.

PMID: 18496812 [PubMed - indexed for MEDLINE]
Higher free testosterone levels in women are a function of lower levels of sex hormone-binding globulins (SHBG), higher levels of total testosterone, or both. When free testosterone levels are decreased, sebum production, a pathogenic feature of acne vulgaris, is also decreased. Oral contraceptives (OCs) decrease free testosterone levels by reducing testosterone production by the ovaries and adrenal glands, increasing SHBG, and inhibiting conversion of free testosterone to dihydrotestosterone. Studies have shown that the progestin component of OCs lowers androgen levels, which are directly associated with the development of acne lesions. Currently, 3 OCs have received approval for acne from the US Food and Drug Administration. For patients with acne who are already benefiting from OC treatment, there is no need to change the OC; however, when an OC proves insufficient against sebum production, switching to a formulation that is approved for acne is recommended.

PMID: 18338652 [PubMed - indexed for MEDLINE]

Diffraction limits

Longer wavelengths diffract around objects. Consider the bass sound from a cranked stereo: we can't hear the higher pitches from outside because the shorter high-frequency waves don't escape the enclosed car, while the longer, deeper bass waves do. Similarly with light, the shorter the wavelength, the more a small object reflects it. Electron microscopes use electrons with much shorter wavelengths than light, hence we get more reflection and can resolve finer detail. Diffraction limit: if an object is smaller than about 1 wavelength, then the object is not "seeable" with that wavelength. This size is called the diffraction limit.
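
As a rough illustration (non-relativistic, with an assumed accelerating voltage), the de Broglie wavelength of a microscope electron comes out tens of thousands of times shorter than visible light, which is why it can resolve much smaller objects:

```python
import math

H = 6.626e-34    # Planck constant, J*s
M_E = 9.109e-31  # electron mass, kg
Q_E = 1.602e-19  # electron charge, C

def electron_wavelength(volts):
    """Non-relativistic de Broglie wavelength of an electron, in metres."""
    return H / math.sqrt(2 * M_E * Q_E * volts)

green_light = 500e-9                    # ~500 nm visible light
electron = electron_wavelength(10_000)  # assumed 10 kV accelerating voltage
print(electron)                # on the order of 1e-11 m: tens of picometres
print(green_light / electron)  # light's wavelength is tens of thousands of times longer
```

By the diffraction-limit rule above, the light wave is blind to anything much smaller than ~500 nm, while the electron's ~10 pm wavelength pushes the limit down by four orders of magnitude (real microscopes also need a relativistic correction at high voltages).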