2.5% chance to be as lucky as we are

Today our son Simon is 11 years old and we are celebrating our 24th wedding anniversary. The journey to becoming parents of two amazing kids was not easy.


The first pregnancy happened a few months after marriage. It was not planned or desired; we even considered abortion. I miscarried at 7-8 weeks. It was not a big deal at that time, shit happens. Being 19 and 21 years old, we naively thought we had a universe of time ahead of us.

Memories of the 2nd and 3rd pregnancies are very blurry. One ended in miscarriage at 9-10 weeks. For the 3rd one, there was a stopped heartbeat on ultrasound at 12-13 weeks, after 2-3 weeks of bed rest in the hospital.

So after the third unsuccessful pregnancy and some very minor testing, doctors in Kiev discovered that I was Rh negative and told me not to get pregnant for 3 years. Nobody in our medicine knew about the Rhogam shot at that time, I guess.

The fourth pregnancy happened after we moved to the US. Everything was seemingly under control. The Rhogam shot was given. Still, I miscarried again at 8-9 weeks.

For the fifth pregnancy we went to an infertility clinic in Akron, OH and did lots and lots of testing. Eureka! It turned out that I am a carrier of a balanced chromosomal translocation. Genetic testing showed a 22% chance of conceiving a fetus either without it or as a balanced carrier like me. One more (fifth) naturally conceived pregnancy ended in miscarriage again, and the fetus tested positive for an unbalanced translocation.

For the sixth pregnancy, in 2001, we decided to take real measures and planned in-vitro fertilization with preimplantation genetic diagnosis (PGD). At that time there were 3 places in the country doing PGD; Yuri Verlinsky at the Reproductive Genetics Institute in Chicago was a pioneer of the procedure. Through stimulation I was quite fruitful, producing 19 eggs. They were fertilized and given to us in a thermobox, which we put in the back of our car and drove to Chicago for the PGD test. After 3 days of anxious waiting the results came in: 3 out of 19 embryos were viable for implantation. Two were like me, carriers of the balanced translocation, and one was completely clear. Based on this “experiment”, my chance of conceiving a normal pregnancy was 15%. We took those viable embryos back in the box to Ohio for implantation. One of them did not make it to the 5th day; the other two were transferred, and one implanted and resulted in a successful pregnancy.

We could not believe our happiness. 14 years ago, when I was in my 5th month of pregnancy, in January of 2002, we moved to Albuquerque. At UNM hospital our new doctor could not believe our story. During a detailed ultrasound, one of the measurements was within the norm but on the higher end of the allowable range, and therefore could be considered a weak indicator of a small chance of Down syndrome. Amniocentesis was scheduled. Two days after the amnio we received good news: no Down syndrome. A week after the amnio I came for a regular scheduled appointment and left with heartbreaking news: the fetus did not have a heartbeat. The accepted risk of amnio is 1:400 to 1:200. And I was the “lucky” one, carrying a 15%-probable (or even 5%, if we count 1 successfully implanted out of 19 embryos) baby, one out of 200-400.

Two years later we decided to try a natural pregnancy one more time. If not, we were thinking of adopting. This pregnancy was not welcomed at all. I was scared by the prospect of doctor visits, the anticipation of bad news, D&C, etc. And 9 months later our first miracle child, Simon, was born by C-section. It was the present for our 13th wedding anniversary on March 28, 2005. It seemed that our family had to move into the teen years of the marriage to deserve a child. You can’t blame me for loving number 7 ever since. The seventh pregnancy gave us what we had stopped dreaming of.

Here was supposed to be the end of the story of how we became the luckiest people ever. But it is not the end.

A few years passed and we became serious again about adopting a 2nd child. But then I decided to try a natural pregnancy one more time. It was beyond naive. The chance of successfully drawing 3 out of 19 twice in a row is 2.5%. And 4 years later, in 2009, our 2nd miracle child, daughter Paulina, was born from a naturally conceived, carried-to-term eighth pregnancy. You can’t blame me for loving number 8 ever since.

As my mom said, “maybe you don’t have the translocation anymore!”
I just think that the chance of being as lucky as we are is 2.5%! And we’re living it!


A different kind of map: a map of road deaths

The UN has launched a ‘decade for action’ to tackle road traffic accidents, which kill more people around the world than malaria and are the leading cause of death for young people, especially in developing countries.

I was interested in visualizing, by PCA, the most recent data on traffic deaths and injuries from the 2009 Global status report on road safety. I used a subset of countries where all of the data were available, which also makes the “statistical map” less cluttered by small countries.

The map shows countries (green squares) and statistics (red diamonds). The closer countries are to each other on the map, the more similar they are in the parameters describing them; in this case those are # of deaths, % of each type of death, GNI, etc. The closer those parameters lie to a group of countries, the larger their values are for that group. For example, Russia, Iran, Chile and South Africa have the largest # of deaths per capita and % of pedestrians killed (the two red diamonds closest to this group of countries).

The resulting map (a biplot of principal components) speaks for itself. The majority of road deaths in poor countries are pedestrians, with cyclists and bicyclists following behind. Developed countries have more vehicles and a larger % of deaths in car accidents. Interestingly, Japan, despite having the largest fleet, has a small number of deaths in cars. The Netherlands, not surprisingly given how many bicyclists it has, stands away from the rest of Europe and other developed countries with its larger % of bicyclist deaths.
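For readers unfamiliar with how such a "statistical map" is built, here is a minimal sketch of a PCA biplot computation. The data are random stand-ins, not the report's numbers; the country/indicator counts are illustrative.

```python
import numpy as np

# Rows = countries, columns = indicators (deaths per capita,
# % pedestrian deaths, GNI, ...). Synthetic stand-in data.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))          # 20 "countries", 5 "indicators"

# Autoscale each column (mean 0, unit variance) so indicators
# measured on very different scales contribute equally.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA via SVD of the autoscaled matrix.
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = U[:, :2] * S[:2]             # country coordinates (green squares)
loadings = Vt[:2].T                   # indicator coordinates (red diamonds)

# A biplot draws both on the same axes: countries lying in the
# direction of an indicator's loading have large values of it.
print(scores.shape, loadings.shape)   # (20, 2) (5, 2)
```

Plotting `scores` as points and `loadings` as a second set of markers on the same axes reproduces the country/statistic overlay described above.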

Optimization of ink composition based on a non-platinum cathode for single membrane electrode assembly proton exchange membrane fuel cells

The paper on research we did a long time ago is out.

XPS structural information is correlated with fuel cell electrochemical performance and stability by Principal Component Analysis.

Non-Pt based oxygen reduction catalyst fuel cell performance is reported for various electrode compositions. Ink formulations for pyrolyzed Co porphyrin based cathode electrocatalysts were evaluated in a membrane electrode assembly (MEA) configuration, and X-ray photoelectron spectroscopy was performed on the MEA catalyst layers. The effects of the cooling time trajectories of the catalysts after pyrolysis, as well as the Nafion content in the ink formulation, were studied. By building statistical structure-to-property relationships between XPS and MEA performance using multivariate analysis, we determined that the higher stability of inks containing fast-cooled catalysts is mainly associated with better preserved graphitic carbon from the carbon black and C–F moieties of the Nafion, while better MEA performance results from the presence of these moieties as well as pyridinic nitrogen and nitrogen associated with metal in the pyropolymer. The optimal Nafion content is at a 1:1 catalyst:Nafion weight ratio, while higher Nafion concentrations cause oxidation of the Nafion backbone itself as well as leaching of CoxOy particles from the catalyst and formation of oxidized species of Co, O, C and F.

What data behind “change in trust in science” really show

A post by Razib Khan made me want to look a bit more at the data behind the questionable change in trust in science from 1998 to 2008.

The dilemma of whether trust in science vs. religion was impacted by the “broadsides against religion” was approached by asking respondents whether they agree with this statement: “We trust too much in science and not enough in religious faith.” The responses were:

– Strongly agree
– Agree
– Neither agree nor disagree
– Disagree
– Strongly disagree

The data are right here:

Looking at these data, Razib made a very reasonable conclusion: “don’t see much difference”.

I could not pass up an opportunity to apply principal component analysis to the table above.

The biplot below shows both responses and demographic categories.

Demographic categories in 1998 are shown in green and those in 2008 in blue. Arrows connect the same demographic category between the two years. It is clear that there is no change in the total and in the majority of individual categories.

However, 3 peculiarities caught my attention. There are three red arrows on the plot showing quite significant change. That’s why I love PCA: an easy way to visualize data with multiple variables, while the data are still there for us to explore (some think that PCA is black magic that eats all the data away)!

So back to the original data. Changes among those with “none” religious preference and “liberal” political views are quite similar (overlap between these groups is not surprising): a big part of the people who were uncertain (“neither”) transitioned into the “disagree” group. For the “independent” class, responses in all categories changed except the “agree” group (an interesting observation by itself; does it indicate the fluid, unpredictable character of independent voters?). A big part of “strongly agree” and “neither” is lost (from 14% to 5% and from 34% to 28%, respectively) while the “disagree” % grew from 22% to 36%.

To sum up, careful analysis of the data shows that in all three categories of respondents with the largest changes from 1998 to 2008, the group supporting science grew (the “disagree” and “strongly disagree” response categories). The major source of this growth seems to be the pool of those with a neutral opinion (“neither”), except for independents, for whom a large % of those who “strongly agree” also switched to “disagree”.

So, I am confused. People in conservative and religious groups, who would be affected the way Robert Wright hypothesizes, did not change the way they view trust in science vs. religion. At the same time, more people from liberal groups disagree with the statement, indicating that they trust science more than before. How exactly is this a sign of weakening trust in science?

By the way, I find the statement to be a pretty confusing way to ask such a straightforward question…

Only one report of QSPR modeling of electrocatalysts has been published… Sad and glad

A very important review on “Quantitative Structure–Property Relationship Modeling of Diverse Materials Properties” has come out from an Australian group in Chemical Reviews.

A quote:

“Only one report of QSPR modeling of electrocatalysts has been published. The electrochemical performance of six samples of nonplatinum porphyrin-based catalysts of oxygen reduction was predicted based on 24 XPS spectral variables and electrochemical measurements. The combination of genetic algorithm and multiple linear regression generated a model that had excellent predictivity for the training set and good cross-validation performance. However, the imbalance between the small data set size and number of descriptors risks overfitting the QSPR model.”

Note to myself: make students burn more samples!

 

Is fast, label-free detection of viruses, toxins or even DNA fragments possible in nanochannels?

Fluids confined in nanometer-sized structures exhibit physical behaviors not observed in larger structures, such as those of micrometer dimensions and above, because the characteristic physical scaling lengths of the fluid very closely coincide with the dimensions of the nanostructure itself. For example, confinement of molecular transport in fluidic channels with transport-limiting pore sizes of nanoscopic dimensions gives rise to unique molecular separation capabilities. Such nanofluidic structures are widely used for separating fluids with disparate characteristics. The development of bio-nanofluidic technology for chip-based analysis systems makes it possible to investigate DNA behavior at the single-molecule level.

Various molecular separation techniques, such as nanochannel electrophoresis, microchannel capillary electrophoresis and gel electrophoresis, rely on the difference between the velocities of molecules due to their different sizes, charges, or a combination of both. After sufficient time has passed, clearly visible bands of separated molecules are observed using various possible detection schemes.

Think about two fish moving down a stream. If they look identical and weigh the same, how do we know whether one of them has eaten another small fish for dinner? And that is a very important question, believe me! The only way to answer it is to take your stopwatch and wait for both fish to swim far enough down the stream that the difference in velocities becomes apparent, depending on the time sensitivity of your stopwatch.

In this analogy the fish is an antibody and the dinner is an antigen in the biochemical world. If there is no antigen present, a single band of antibody will move down the nanofluidic channel with velocity v1. When antigen is present, however, part of the antibody will form an antibody-antigen complex and move with a slower velocity v2, while the rest of the antibody will be left unbound and travel with the same velocity v1. As time passes, this results in two clear bands separated along the length of the separation platform.
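As a back-of-the-envelope sketch of this two-velocity picture: the gap between band centers grows linearly with time, so the time to resolve the bands follows directly. All numbers below are made-up illustrative values, not measurements from the post.

```python
# Illustrative two-band separation: unbound antibody moves at v1,
# the antibody-antigen complex at the slower v2.
v1 = 100.0         # um/s, unbound antibody (assumed value)
v2 = 80.0          # um/s, antibody-antigen complex (assumed value)
resolution = 50.0  # um, minimum resolvable gap between band centers

# Gap between band centers grows linearly: gap(t) = (v1 - v2) * t,
# so bands become distinguishable once gap(t) exceeds the resolution.
t_separate = resolution / (v1 - v2)
print(f"bands resolved after ~{t_separate:.1f} s")  # ~2.5 s
```

With a smaller velocity difference (a weakly shifted complex), the required separation time grows as 1/(v1 − v2), which is why detecting velocities early, before visible bands form, is so valuable.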

If the two separated molecules are fluorescently labeled with different dyes, they can be imaged by fluorescence microscopy. This is shown in the example below, where a model receptor/toxin system has been separated by capillary electrophoresis.

Green-labeled GM1 forms a complex with red-labeled CTB and moves slower than the excess of unbound GM1. A clear green band of GM1 is followed by an orange band of the complexed receptor/toxin mixture, confirming the presence of the toxin in the system.

This is the main principle of using separation assays for detection of various viruses, toxins, etc. The problem with all these detection systems is that they often must involve labeling of analytes and binding agents with dyes, and sometimes it may take a long time to see clearly separated bands and be certain that the analyte is present. Two different flow velocities, as in the example with the fish, are either obvious from visual analysis of the images (clearly separate bands) or can be determined by manual calculations from images as a function of separation time, which is a tedious, time-consuming and quite subjective process dependent on the analyst doing the calculations.

This is where the patent “Method for multivariate analysis of confocal temporal image sequences for velocity estimation” comes in handy. It allows identifying whether two flow velocities are present from the way the intensities of images acquired as a function of separation time change.

  • The first very important benefit of this methodology is that it can be applied at the very beginning of the experiment, when no clearly visible separation is present, with as few as the first four acquired images.
  • The second benefit is that no labeling of molecular species is necessary, as the presence of two flow velocities can be determined from the grayscale intensity of the Green or Red channel, or of the overall RGB image converted to grayscale.

We have shown this in “Detecting molecular separation in nano-fluidic channels through velocity analysis of temporal image sequences by multivariate curve resolution”, published in the journal Microfluidics and Nanofluidics.

Visualizing Life Satisfaction data by Multivariate Analysis

This week OECD relaunched their Better Life Index and provided the data behind it.

I’ve applied multivariate statistical data analysis methods to the average values, and you can see the results below. Quite interesting groups of countries emerged.

The X-axis separates countries with a high Life Satisfaction index from those with a low one. The Y-axis separates countries by job availability.

  • The most satisfied group of countries is in the top right quadrant, containing all the highly developed countries.
  • The least satisfied group of countries is in the bottom left quadrant of the plot, with unemployment being the major factor contributing to their dissatisfaction. It is highest for the Eastern European bloc, which experienced economic difficulties in recent years.
  • Countries in the top left quadrant are less happy than those in the top right quadrant, but not by much. The major factors are a high level of crime and long working hours. The least satisfied in this group is Turkey (farthest on the plot from the Life Satisfaction index). Interestingly, Israel has some of the highest wealth and health indicators and the lowest crime, but at the same time long working hours and worse housing conditions.
  • Countries in the center of the plot are where all indicators balance out. The level of life satisfaction for this group is in between the others: not very high “positive” indicators such as wealth and health, and not very high “negative” indicators such as unemployment and crime.
  • Education does not seem to affect life satisfaction as much as the other parameters. It is lowest for the group in the top left quadrant and highest for the group in the top right, yet both groups are quite satisfied with life.
Analysis of women’s and men’s values separately will be done soon as well.
PLSDA (PLS_Toolbox 6 in Matlab) was used with autoscaling for processing.
Original data used for analysis:

Art of curve-fitting… or black magic of curve-fitting XPS spectra

Why talk about something as well known to the surface analysis community as curve fitting high-resolution XPS spectra? Two reasons.

First is the skepticism I run into every time I show curve fits of spectra to scientists outside the surface analysis community. Their reaction is that curve fitting is meaningless, since any particular spectrum can be fit with an infinite number of combinations of peaks of different widths and shapes with the same goodness of fit. So every time I give a presentation showing curve fitting results to people who don’t do it for a living, I talk about the physical reasons behind the Gaussian-Lorentzian peak shape, the fundamental limits contributing to FWHM, and the general rules of accurate, reproducible curve fitting.

And the second reason is that even though we all know the rules behind good curve-fitting practice, the scientific literature is filled with poorly fitted spectra and, therefore, incorrectly interpreted XPS data.

What’s wrong with the set of spectra shown on the left? And why do the spectra on the right represent the “correct” way of curve fitting?


Let’s look at one example from the literature. N 1s spectra of three samples (unmodified, and heated at two different temperatures) are analyzed.


The first sample is the unpyrolyzed (unmodified) one, which is used as a reference. Peak NI has an adequate width for the N 1s spectral line. Why, then, is peak NII twice as wide as peak NI?


Heat treatment of the sample at 800°C introduces changes that are obvious from the spectral shape. Peak NI is at the same binding energy position and has approximately the same width. Suddenly, peak NII is twice as narrow as in the unpyrolyzed sample. Peaks NIII and NIV are added to complete the curve fit. This curve fit, by itself, meets all the requirements of a good fit: the three main peaks have approximately the same FWHM. So why is there no peak NIII in the unpyrolyzed sample? If one makes peak NII in the unpyrolyzed sample of adequate width (the same as in the 800°C sample), then a third peak NIII must be added to complete the curve fit of the unpyrolyzed sample as well.


At 900°C, peak NI becomes twice as wide as in the unpyrolyzed sample, and peak NII becomes three times narrower than in the unpyrolyzed sample. And my “intelligent” guess is that the authors conclude there is a significant decrease of the species contributing to the binding energy of peak NII from 800 to 900°C.

This type of curve fit is exactly why there are so many reservations against using curve fitting of spectra for quantitative evaluation of changes in chemistry.

And still, it is very easy to do an accurate, reproducible curve fit if you remember a few things.

1. The fundamental processes contributing to the width of the peaks used for curve fitting (N 1s, in this case) are the same for all peaks used, i.e. the natural width of the incident X-ray line, thermal broadening, the pass energy of the analyzer, the lifetime of the electron hole, etc. From a reference sample with just one type of N, O or metal, for example, analyzed on your particular instrument, it is easy to find the adequate width for a particular line of the element. ALL peaks within this element should have the same FWHM to within +/- 0.2 eV.

2. One of the bases of adequate interpretation of changes introduced by any type of modification is identification of the peaks in the unmodified sample. Once the spectra of the reference sample are curve fitted, the BE position and FWHM have to be constrained to +/- 0.2 eV each. This curve fit can then be copied into the curve fits of spectra from all other samples in the series. If the set of peaks present in the unmodified (reference) sample is not sufficient to complete a curve fit, new peaks of the same FWHM have to be added.

3. If you don’t have a reference sample, any sample in the series can be used as the “reference” for the curve fit. This curve fit can then be propagated to all other samples for accurate comparison of changes between samples. CONSTRAINTS, CONSTRAINTS AND CONSTRAINTS!

4. Cross-correlation of the elements is key! If you have identified, let’s say, C-N=O in the N 1s spectrum, there should be a peak due to the same type of chemistry in both the C 1s and O 1s spectra.
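As a hedged sketch of these constraint rules (not any vendor’s fitting software; the peak positions, FWHM window, and Gaussian-Lorentzian mixing fraction below are illustrative), a synthetic N 1s doublet can be fit with one shared FWHM for all components and binding energies constrained to ±0.2 eV around the reference fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(x, amp, center, fwhm, eta):
    """Gaussian-Lorentzian mix, a common XPS peak shape."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    gauss = np.exp(-((x - center) ** 2) / (2 * sigma ** 2))
    lorentz = 1 / (1 + ((x - center) / (fwhm / 2)) ** 2)
    return amp * (eta * lorentz + (1 - eta) * gauss)

def two_peaks(x, a1, c1, a2, c2, fwhm):
    # One shared FWHM for both components enforces rule 1.
    return (pseudo_voigt(x, a1, c1, fwhm, 0.3)
            + pseudo_voigt(x, a2, c2, fwhm, 0.3))

# Synthetic N 1s region: peaks near 398.5 and 400.5 eV (illustrative).
x = np.linspace(395, 405, 400)
y = two_peaks(x, 1.0, 398.5, 0.6, 400.5, 1.4)
y += np.random.default_rng(1).normal(0, 0.01, x.size)

# Constrain each BE to +/- 0.2 eV around the reference positions
# and keep the shared FWHM in a narrow physical window (rule 2).
p0 = [1.0, 398.5, 0.5, 400.5, 1.4]
lo = [0.0, 398.3, 0.0, 400.3, 1.2]
hi = [np.inf, 398.7, np.inf, 400.7, 1.6]
popt, _ = curve_fit(two_peaks, x, y, p0=p0, bounds=(lo, hi))
print(np.round(popt, 2))
```

The bounds play the role of the CONSTRAINTS emphasized in rule 3: the optimizer cannot wander into fits with wildly different widths or shifted positions, no matter how good the residual looks.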

These little things will allow you to be as consistent as possible throughout your sample set and to make sound conclusions about chemistry.

It is critical to publish and present high quality XPS data processing of spectra to ensure all the trust this powerful method deserves. Let’s do it!


History of photoelectron spectroscopy


1887: Heinrich Hertz published, “On an effect of UV light upon the electric discharge” (Sitzungsber d. Berl. Akad. d. Wiss., June 9, 1887).

1895: Discovery of X-rays by W.K. Röntgen.

1897: J.J. Thomson’s cathode ray tube experiments for measuring e/m of electrons: a primitive electron spectrometer.

1905: Einstein’s equation for the photoelectric effect: eV = hν − φ.

1907: Innes, a Ph.D. student, conducted research on “….the velocity of the cathode particles emitted by various metals under the influence of Röntgen rays….” (Proc. Roy. Soc., Ser. A 79, 442 (1907)). A photographic plate was used to measure the deflection of photoelectrons in a magnetic field.

1918: First XPS paper by a Harvard University researcher, Mang-Fuh Hu, reported, “some preliminary results in a determination of the maximum emission velocity of the photoelectrons from metals at X-ray frequencies” (Phys. Rev. 11, 505(1918)).

1925: H. Robinson, a pioneer who devoted his entire research career to XPS, wrote that “…an accurate knowledge of the energies associated with the different electronic orbits within the atoms is essential to the further development of the theory of atomic structure” (Proc. Roy. Soc., Ser. A 104, 455 (1923)).

1950: R.G. Steinhardt Jr. published his PhD thesis, “An X-ray photoelectron spectrometer for chemical analysis” (Lehigh University). He was also the first to recognize that “X-ray photoelectron spectra are profoundly influenced by the chemical and physical nature of the surface under investigation” (Anal. Chem. 25, 697 (1953)).

1954: Kai Siegbahn built his high resolution photoelectron spectrometer, and subsequently established XPS as an important research and analysis tool. (Figure 3.2.2 from K. Siegbahn, C. Nordling, A. Fahlman, R. Nordberg, K. Hamrin, J. Hedman, G. Johnsson, T. Bergmark, S. E. Karlsson, I. Lindgren, and B. Lindberg, Nova Acta Regiae Soc. Sci. Ups. 20, 7 (1967).)

1981: Kai M. Siegbahn was awarded the Nobel Prize for “his contribution to the development of high-resolution electron spectroscopy”. (Nobel Lectures in Physics (1981-1990), World Scientific Publishing Co. Pte. Ltd., 1993)

Information from http://www.phy.cuhk.edu.hk/course/surfacesci/mod3/m3_s1.pdf


Calibrating XPS spectra. Trivial thing?

This is a schoolbook task. Everybody dealing with XPS spectra knows that if a charge neutralizer is used to compensate for the charging of not fully conductive samples, it causes spectra to shift to lower BE. To correct for that, spectra must be calibrated, charge corrected, shifted: whatever word you use to describe this procedure.

Adventitious carbon contamination is a vice, but its virtue is that (almost) all samples have it, so we can reliably use it to calibrate spectra for lots and lots of materials. Its position is assumed to be 284.8 eV. Put the maximum of the C 1s spectrum at that position, apply the same shift to all other spectra from that sample, and voila!
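The arithmetic of this procedure is simple enough to sketch. The function name and the synthetic spectrum below are illustrative, not part of any XPS software; only the 284.8 eV reference value comes from the text.

```python
import numpy as np

# Assumed adventitious carbon C 1s reference position, in eV.
C1S_REF = 284.8

def charge_shift(be, counts):
    """Shift (eV) that puts the C 1s maximum at the 284.8 eV reference."""
    measured_max = be[np.argmax(counts)]
    return C1S_REF - measured_max

# Synthetic C 1s region whose maximum sits at 283.6 eV due to charging.
be = np.linspace(280, 292, 601)
counts = np.exp(-((be - 283.6) ** 2) / (2 * 0.6 ** 2))

# The same shift is then added to the BE axis of every other
# spectrum acquired from that sample.
shift = charge_shift(be, counts)
print(round(shift, 2))  # 1.2
```

The catch, as discussed next, is that this whole correction rests on knowing which kind of carbon the C 1s maximum actually represents.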

But what if carbon is a major part of the material you’re designing or studying? Is it mainly graphitic (284.4 eV), aromatic (284.7 eV), or aliphatic (285 eV)? Are there lots of surface oxides causing a big secondary shift of C at 285.5 eV? How can you reliably use C as an internal standard when you may know nothing about the carbon itself? Putting the maximum of a carbon peak at 284.8 eV (assumed to be representative of internal hydrocarbon), how sure are we that we are not calibrating all of the spectra by graphitic or secondary carbon? The difference of 0.4-0.6 eV may not seem significant, but we cannot claim to resolve peaks as close as 0.2-0.3 eV and accurately identify them at such calibration accuracy, can we?


If we put an Au or Ag reference material (available as paints, pens or powders) onto each sample individually, we can see the effect. The figure shows that for bipyridine the difference between using C and Au for calibration is small, only 0.2 eV.

However, for another sample shown, CoTMPP, the difference in calibration is 1 eV. So if we had used C (and we did), we would have incorrectly identified the types of Co, N and O present in the sample.

The purpose of surface analysis of most functional advanced materials is to correlate surface chemistry with whatever parameter of performance or interest. In this particular example, the state of N is of critical importance to understanding the structure of the active site in the electrocatalysts. Pyrrolic N and pyridinic N are among the suspected species responsible for oxygen reduction. And, coincidentally, the difference in BE between N in pyridinic and pyrrolic environments is… yes, you guessed it right: 1 eV, exactly the difference we found between calibrating by Au and by C. So, if we had used C, we would have found that the majority of N is in the pyrrolic state, and if we had used Au, we would conclude that pyridinic is the main type of N.

No wonder that out of 35+ manuscripts reviewed, N speciation in pyropolymers or fuel coals as determined by XPS shows a huge spread of reported values for all major types of N.


Is it because all of them used carbon as the internal standard for calibrating their spectra?

I think you know the answer…


Shifting gears.


After the dramatic lifting of the 1-million XPS spectrometer to the 2nd floor of a building with no freight elevator, I will try posting notes and thoughts on vacuum science, data analysis, image processing: whatever research subject occupies my mind.

2.5 % chance to be as lucky

This coming March my husband and I will be celebrating our 20th wedding anniversary. Our child could’ve been 19 years old now. Could’ve. But isn’t.

Back 20 years, to Kiev, Ukraine. The first pregnancy happened a few months after marriage. It was not planned or desired; we even considered abortion. I miscarried at 7-8 weeks. It was not a big deal at that time, shit happens. Being 19 and 21 years old, we thought we had so much time ahead of us.

Memories of the 2nd and 3rd pregnancies are very blurry. One ended in miscarriage at 9-10 weeks. For the 3rd one, there was no heartbeat on ultrasound at 12-13 weeks, after 2-3 weeks of bed rest in the hospital. I won’t share all the details of hospitals in Kiev in the early nineties. I will just tell you that last summer, when I went to a hospital in Kiev with my husband, who had a minor outpatient surgery done, I got the first anxiety attack of my life.

So after the third unsuccessful pregnancy and some very minor testing, doctors discovered that I was Rh negative and told me not to get pregnant for 3 years, so that all the antibodies would weaken, or something like that. Nobody in our medicine knew about the Rhogam shot at that time, I guess.

The fourth pregnancy happened after we moved to the US. Everything was seemingly under control. The Rhogam shot was given. Still, I miscarried again at 8-9 weeks.

For the fifth pregnancy we went to an infertility clinic in Akron, OH and did lots and lots of testing. Eureka! It turned out that I am a carrier of a balanced chromosomal translocation. Genetic testing showed a 22% chance of conceiving a fetus either without it or as a balanced carrier like me. One more (fifth) naturally conceived pregnancy ended in miscarriage again, and the fetus tested positive for an unbalanced translocation.

For the sixth pregnancy, in 2001, we decided to take real measures and planned in-vitro fertilization with preimplantation genetic diagnosis (PGD). At that time there were 3 places in the country doing PGD; Yuri Verlinsky at the Reproductive Genetics Institute in Chicago was a pioneer of the procedure. Through stimulation I was quite fruitful, producing 19 eggs. They were fertilized and given to us in a thermobox, which we put in the back of our car and drove to Chicago for the PGD test. After 3 days of anxious waiting the results came in: 3 out of 19 embryos were viable for implantation. Two were like me, carriers of the balanced translocation, and one was completely clear. Therefore, the experimentally tested chance of me conceiving a normal pregnancy was 15%. We took those viable embryos back in the box to Ohio for implantation. One of them did not make it to the 5th day; the other two were transferred, and one implanted and resulted in a successful pregnancy.

We could not believe our happiness. Exactly 10 years ago, when I was in my 5th month of pregnancy, in January of 2002, we moved to Albuquerque. At UNM hospital our new doctor could not believe our story. During a detailed ultrasound, one of the measurements was within the norm but on the higher end of the allowable range, and therefore could be considered a weak indicator of a small chance of Down syndrome. Amniocentesis was scheduled. Two days after the amnio we received good news: no Down syndrome. A week after the amnio I came for a regular scheduled appointment and left with heartbreaking news: the fetus did not have a heartbeat. The accepted risk of amnio is 1:400 to 1:200. And I was the “lucky” one, carrying a 15%-probable (or even 5%, if we count 1 successfully implanted out of 19 embryos) baby, one out of 200-400.

Two years later we decided to try a natural pregnancy one more time. If not, we were thinking of adopting. This pregnancy was not welcomed at all. I was scared by the prospect of doctor visits, the anticipation of bad news, D&C, etc. And 9 months later our first miracle child, Simon, was born by C-section. It was the present for our 13th wedding anniversary on March 28, 2005. It seemed that our family had to move into the teen years of the marriage to deserve a child. You can’t blame me for loving number 7 ever since. The seventh pregnancy gave us what we had stopped dreaming of.

Here was supposed to be the end of the story of how we became the luckiest people ever. But it is not the end.

A few years passed and we became serious again about adopting a 2nd child. But then I decided to try a natural pregnancy one more time. It was beyond naive. The chance of successfully drawing 3 out of 19 twice in a row is 2.5%. And 4 years later, in 2009, our 2nd miracle child, daughter Paulina, was born from a naturally conceived, carried-to-term eighth pregnancy. You can’t blame me for loving number 8 ever since.

As my mom said, “maybe you don’t have the translocation anymore!”
I just think that the chance of being as lucky as we are is 2.5%! And we’re living it!


Data analysis for vaccine awareness week

I have combined the following data into one data matrix:
– 2009 vaccination data table (a subset of the most often given vaccines, reflecting the trend);
– American Human Development Index by state, from the American Human Development Project;
– Classification of blue and red states.

First, I applied PCA to just the vaccination rates data, with the states’ partisan classification used for classes.
No correlation between vaccination rates and partisan class was observed.

Second, Principal Component Analysis was applied to all the data (vaccination rates and human development index) with autoscaling.

PC1 captures 39% of the variance in the data and separates states with a high rank (cumulative index) and high Education, Income and Health indices from those with a low rank. Mostly blue states and some red states (AK, ND, NE, UT, KS) have the highest rank. Vaccination rates do not contribute to PC1 (loadings close to 0), indicating that there is no direct correlation between the human development index and vaccination.

PC2 separates states by vaccination rates. Those on top have higher vaccination rates than those on the bottom of the biplot. There is a weak correlation (captured in ~16% of the variance in the data) between vaccination rates and the education and income indices and rank.

4 groups of states are classified by PCA:
1. Red states with quite good vaccination rates and a very bad HD index.
2. Mostly blue states and some red states with the best vaccination rates and the best HD index.
3. Purple states with the worst vaccination rates and the worst HD index.
4. Mixed states (some blue, some red, and CO) with very bad vaccination rates but a high Health index.

New (milder) version of visualization of sexual data

Map of sexual activities created by PCA


Some additional interpretation is included as well