Sunday, November 7, 2010

Biotechnology and Your Holiday Meal

Beer and wine, products of fermentation using yeast, are the most widely accepted products of biotechnology consumed during the holiday season. Other, more controversial food items are products of agricultural biotech, such as genetically modified (GM) vegetables or the oversized turkeys we now have, developed through traditional breeding. However, you may be surprised to learn that there are many long-standing traditions for Christmas meals that require knowledge of either how to use microorganisms or how to control their growth. Are any of these on your holiday table?
Cheese – while not necessarily a tradition at Christmas time, cheese is an example of biotechnology in the dairy industry, and cheese balls are nearly a staple at holiday parties. Then there is the wine and cheese party, or those who enjoy a slice of cheddar with their apple pie following turkey dinner. Cheese is made by letting milk become acidified, usually by bacteria, and some cheeses are also ripened using molds.
Jams and jellies, such as might be found in a Christmas Yule log cake, or cranberry sauce, are products of biochemical reactions, although early chefs probably did not understand the reactions taking place in their pots. When mashed fruit is boiled in sugar and water, a chemical reaction takes place between the sugar, acids from the fruit, and pectin, a polysaccharide (polymer of sugars) found in the fruit. Once the reaction has taken place and the mixture has cooled, it forms a semi-solid gel. Our knowledge of enzymes has advanced since the advent of jelly, such that we now use pectinase enzymes to prevent the gelling reaction from taking place. This is key to making home-made fruit wines and preventing them from clouding.
Wassail – ever wonder what they meant in the traditional Christmas song Here We Come A-Wassailing…? Basically, it means drinking – a beverage made originally from spiced wine and later from spiced British ale, often enriched with cream and heated until frothy on top (called Lamb's Wool). Many more variations exist, most likely depending on the wealth of the maker and the available ingredients.
Christmas Pudding – otherwise known as Plum Pudding or Christmas Cake, is traditionally made early, from a week ahead to as much as a year ahead, and allowed to age. The Sunday before the beginning of Advent is known as Stir-up Sunday as it is the tradition to make the pudding then. To age the cake, it is doused in some kind of alcohol and wrapped in cheesecloth that has also been soaked in alcohol. The alcohol seems to serve a dual purpose – both preserving the cake and adding enhanced flavor as it soaks in over time. Note: if mold appears on the cake, according to The Encyclopedia of Country Living, that's alright, just trim it off and eat the cake.
Stollen – a traditional cake made in Europe, particularly Germany, for Twelfth Night, twelve days after the celebration of Christ's birth. The cake might also be called Christstollen. Although many variants exist, the basis of Stollen is a sweet yeast-based dough with the addition of dried fruit and sometimes nuts. Another traditional Christmas bread requiring yeast is the Italian Panettone.

How PCR Works

What is PCR?

The polymerase chain reaction (PCR) is a molecular genetic technique for making multiple copies of a gene; it is also part of the gene-sequencing process. Gene copies are made from a sample of DNA, and the technology is sensitive enough to make multiple copies from a single copy of the gene found in the sample. PCR amplification of a gene into millions of copies allows gene sequences to be detected and identified using visual techniques based on the size and charge (+ or -) of the piece of DNA.
Under controlled conditions, small segments of DNA are generated by enzymes known as DNA polymerases, which add complementary deoxynucleotides (dNTPs) to a piece of DNA known as the "template". Even smaller pieces of DNA, called "primers", are used as a starting point for the polymerase. Primers are small, man-made pieces of DNA (oligomers), usually between 15 and 30 nucleotides long. They are designed from known or predicted short DNA sequences at the very ends of the gene being amplified. During PCR, the DNA being amplified is heated and the double strands separate. Upon cooling, the primers bind to the template (called annealing) and create a place for the polymerase to begin.
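The relationship between a primer and its template can be illustrated with a short script. The sketch below is a hypothetical example (not from the original article; the sequences are invented): the forward primer is taken directly from one end of a known target sequence, while the reverse primer is the reverse complement of the other end. Real primer design also checks melting temperature, GC content and self-complementarity.

# Illustrative sketch only: deriving a pair of hypothetical PCR primers
# from known sequence at the ends of a target region.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(seq.upper()))

# Invented target region for demonstration purposes.
target = "ATGGCTAGCTTAGGCCATATCGGATCCGATTACAAGGATGACGACGATAAGTGA"

forward_primer = target[:20]                       # matches the 5' end of the top strand
reverse_primer = reverse_complement(target[-20:])  # anneals to the 3' end of the top strand

print("Forward:", forward_primer)
print("Reverse:", reverse_primer)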
PCR was made possible by the discovery of thermophiles and thermophilic polymerase enzymes (enzymes that maintain structural integrity and functionality after heating at high temperatures).

The Technique Explained

A mixture is created with optimized concentrations of the DNA template, polymerase enzyme, primers and dNTPs. The ability to heat the mixture without denaturing the enzyme allows the double helix of the DNA sample to be denatured at temperatures around 94 degrees Celsius. Following denaturation, the sample is cooled to a more moderate range, around 54 degrees, which facilitates the annealing (binding) of the primers to the single-stranded DNA templates. In the third step of the cycle, the sample is reheated to 72 degrees, the ideal temperature for Taq DNA polymerase, for elongation. During elongation, DNA polymerase uses the original single strand of DNA as a template to add complementary dNTPs to the 3' ends of each primer and generate a section of double-stranded DNA in the region of the gene of interest. Primers that have annealed to DNA sequences that are not an exact match do not remain annealed at 72 degrees, thus limiting elongation to the gene of interest.
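To make the cycling logic concrete, here is a minimal sketch (an illustration only, using typical textbook temperatures and hold times rather than a validated protocol) of the three temperature steps and the ideal doubling of copies per cycle.

# Minimal sketch of the three-step PCR cycle described above.
CYCLE = [
    ("denaturation", 94, 30),   # separate the double helix (deg C, seconds)
    ("annealing",    54, 30),   # primers bind the single-stranded templates
    ("elongation",   72, 60),   # Taq polymerase extends from the 3' end of each primer
]

def run_pcr(template_copies: int, n_cycles: int):
    """Each complete cycle ideally doubles the number of target copies."""
    copies = template_copies
    total_seconds = 0
    for _ in range(n_cycles):
        for step, temp_c, seconds in CYCLE:
            total_seconds += seconds   # a thermocycler would hold temp_c for this long
        copies *= 2
    return copies, total_seconds

copies, seconds = run_pcr(template_copies=1, n_cycles=30)
print(copies, "copies after", seconds / 60, "minutes of hold time")  # ~10^9 copies

Starting from a single template molecule, 30 ideal cycles yield roughly a billion copies, which is why PCR can detect a single gene copy in a sample.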

DNA Sequencing

The field of biotechnology is one of constant change. The rapid growth and development of cutting-edge research is dependent on the innovation and creativity of scientists and their ability to see the potential in a basic molecular technique and apply it to new processes. The advent of PCR opened up many doors in genetic research, including a means of identifying different genes based on their DNA sequences. DNA sequencing is also dependent on our ability to use gel electrophoresis to separate strands of DNA that differ in size by as little as one base pair.
In the late 1970s, two DNA sequencing techniques for longer DNA molecules were invented: the Sanger (or dideoxy) method and the Maxam-Gilbert (chemical cleavage) method. The Maxam-Gilbert method is based on nucleotide-specific cleavage by chemicals and is best used to sequence oligonucleotides (short nucleotide polymers, usually smaller than 50 base pairs in length). The Sanger method is more commonly used because it has proven technically easier to apply and, with the advent of PCR and the automation of the technique, is easily applied to long strands of DNA, including some entire genes. This technique is based on chain termination by dideoxynucleotides during PCR elongation reactions.
In the Sanger method, the DNA strand to be analyzed is used as a template and DNA polymerase is used, in a PCR reaction, to generate complementary strands using primers. Four different PCR reaction mixtures are prepared, each containing a certain percentage of dideoxynucleoside triphosphate (ddNTP) analogs of one of the four nucleotides (dATP, dCTP, dGTP or dTTP). Synthesis of a new DNA strand continues until one of these analogs is incorporated, at which point the strand is prematurely truncated. Each PCR reaction ends up containing a mixture of DNA strands of different lengths, all ending with the nucleotide that was dideoxy-labeled for that reaction. Gel electrophoresis is then used to separate the strands of the four reactions, in four separate lanes, and to determine the sequence of the original template based on which lengths of strand end with which nucleotide.
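The logic of reading the four lanes can be illustrated with a toy example (the fragment lengths below are invented): sorting all chain-terminated fragments by length, and noting which ddNTP reaction each came from, reads out the template's complementary sequence one base at a time.

# Illustrative sketch of reading a Sanger gel: each ddNTP reaction yields
# fragments ending in one base; sorting all fragment lengths reconstructs
# the sequence. The lengths here are invented for the example.
fragments = {
    "A": [1, 4, 8],      # fragment lengths from the ddATP reaction
    "C": [3, 6],         # ddCTP reaction
    "G": [2, 7],         # ddGTP reaction
    "T": [5],            # ddTTP reaction
}

# Pair every fragment length with its terminating base, then read off the
# bases in order of increasing length (shortest fragment = first base).
calls = sorted((length, base) for base, lengths in fragments.items() for length in lengths)
sequence = "".join(base for _, base in calls)
print(sequence)  # -> "AGCATCGA"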
In the automated Sanger reaction, primers labeled with four different coloured fluorescent tags are used. PCR reactions, in the presence of the different dideoxynucleotides, are performed as described above. The four reaction mixtures are then combined and applied to a single lane of a gel. The colour of each fragment is detected using a laser beam, and the information is collected by a computer, which generates chromatograms showing peaks for each colour, from which the template DNA sequence can be determined.
Typically, the automated sequencing method is only accurate for sequences up to about 700-800 base pairs in length. However, it is possible to obtain full sequences of larger genes and, in fact, whole genomes, using step-wise methods such as Primer Walking and Shotgun sequencing.
In Primer Walking, a workable portion of a larger gene is sequenced using the Sanger method. New primers are generated from a reliable segment of the sequence, and used to continue sequencing the portion of the gene that was out of range of the original reactions.
Shotgun sequencing entails randomly cutting the DNA segment of interest into more appropriate (manageable) sized fragments, sequencing each fragment, and arranging the pieces based on overlapping sequences. This technique has been made easier by the application of computer software for arranging the overlapping pieces.
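A toy sketch of that assembly step is shown below (an illustration only; the reads are invented, and real assemblers must also handle sequencing errors, repeats and both DNA strands). It greedily merges the pair of fragments with the longest suffix/prefix overlap until a single contig remains.

# Toy sketch of shotgun assembly by greedy overlap merging.
def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def assemble(frags: list) -> str:
    frags = frags[:]
    while len(frags) > 1:
        best = (0, 0, 1)  # (overlap length, index i, index j)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        merged = frags[i] + frags[j][k:]
        frags = [f for n, f in enumerate(frags) if n not in (i, j)] + [merged]
    return frags[0]

reads = ["GGATCCGAT", "ATGGCTAGG", "CTAGGATCC"]
print(assemble(reads))  # -> "ATGGCTAGGATCCGAT"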

Nanomedicine and Disease

Nanotechnology refers to the use of man-made, nano-sized (typically 1-100 billionths of a meter) particles for industrial or medical applications suited to their unique properties. Physical properties of known elements and materials can change as their surface-area-to-volume ratio is dramatically increased, i.e. when nanoscale sizes are achieved. These changes do not take place when going from the macro to the micro scale. Changes in colloidal properties, solubility and catalytic capacity have been found very useful in areas of biotechnology such as bioremediation and drug delivery.
The very different properties of the various types of nanoparticles have resulted in novel applications. For example, compounds known to be generally inert materials may become catalysts. The extremely small size of nanoparticles allows them to penetrate cells and interact with cellular molecules. Nanoparticles often also have unique electrical properties and make excellent semiconductors and imaging agents. Because of these qualities, the science of nanotechnology has taken off in recent years, with testing and documentation of a broad spectrum of novel uses for nanoparticles, particularly in nanomedicine.
The development of nanotechnologies for nanomedical applications has become a priority of the National Institutes of Health (NIH). Between 2004 and 2006, the NIH established a network of eight Nanomedicine Development Centers as part of the NIH Nanomedicine Roadmap Initiative. In 2005, the National Cancer Institute (NCI) committed $144.3 million over 5 years to its “Alliance for Nanotechnology in Cancer” program, which funds seven Centres of Excellence for Cancer Nanotechnology (Kim, 2007). The funding supports various research projects in the areas of diagnostics, devices, biosensors, microfluidics and therapeutics.
Among the long-term objectives of the NIH initiative are goals such as using nanoparticles to seek out cancer cells before tumors grow, removing and/or replacing “broken” parts of cells or cell mechanisms with miniature, molecular-sized biological “machines”, and using similar “machines” as pumps or robots to deliver medicines when and where they are needed within the body. All of these ideas are feasible based on present technology. However, we do not currently know enough about the physical properties of intracellular structures, or about the interactions between cells and nanoparticles, to reach all of these objectives. The primary goal of the NIH is to add to current knowledge of these interactions and cellular mechanisms, such that precisely built nanoparticles can be integrated without adverse side effects.
Many different types of nanoparticles are currently being studied for applications in nanomedicine. They can be carbon-based, skeletal-type structures, such as the fullerenes, or micelle-like, lipid-based liposomes, which are already in use for numerous applications in drug delivery and the cosmetic industry. Colloids, typically liposome nanoparticles, selected for their solubility and suspension properties, are used in cosmetics, creams, protective coatings and stain-resistant clothing. Other examples of polymer-based nanoparticles are the chitosan- and alginate-based nanoparticles described in the literature for oral delivery of proteins, and various polymers under study for insulin delivery.
Additional nanoparticles can be made from metals and other inorganic materials, such as phosphates. Nanoparticle contrast agents are compounds that enhance MRI and ultrasound results in biomedical applications of in vivo imaging. These particles typically contain metals whose properties are dramatically altered at the nano-scale. Gold “nanoshells” are useful in the fight against cancer, particularly soft-tissue tumors, because of their ability to absorb radiation at certain wavelengths. Once the nanoshells enter tumor cells and radiation treatment is applied, they absorb the energy and heat up enough to kill the cancer cells. Positively-charged silver nanoparticles adsorb onto single-stranded DNA and are used for its detection. Many other tools and devices for in vivo imaging (fluorescence detection systems), and to improve contrast in ultrasound and MRI images, are being developed.
There are numerous examples in the literature of disease-fighting strategies using nanoparticles. Often, particularly in the case of cancer therapies, drug delivery properties are combined with imaging technologies, so that cancer cells can be visually located while undergoing treatment. The predominant strategy is to target specific cells by linking antigens or other biosensors (e.g. RNA strands) to the surface of the nanoparticles so that they detect specialized properties of the cell surface. Once the target cell has been identified, the nanoparticles adhere to the cell surface, or enter the cell, via a specially designed mechanism, and deliver their payload.
Once the drug is delivered, if the nanoparticle is also an imaging agent, doctors can follow its progress and the distribution of the cancer cells becomes known. Such specific targeting and detection will aid in treating late-phase metastasized cancers and hard-to-reach tumors, and will give indications of the spread of those and other diseases. It can also prolong the effective life of certain drugs, which have been found to last longer inside a nanoparticle than when injected directly into a tumor, since drugs injected into a tumor often diffuse away before effectively killing the tumor cells.

Optimizing Enzyme Processes

Enzymes are frequently used in biotechnology to carry out specific biological reactions, either as chemical replacements for industrial processes or for the production of commercial bioproducts, foods and/or drugs. A large proportion of research goes into finding or creating enzymes with specific properties, but sometimes the enzyme isn't 100% suited to the conditions under which it is needed. Enhancing the characteristics of an enzyme to make it more suitable (characteristics like thermostability, pH optimum, or substrate specificity) can be done using a number of approaches. Each approach differs in the amount of control and specificity it offers for changing the protein or gene at the molecular level.
1) Natural Selection
Technically, this is not an example of how biotechnology is used to change a gene or enzyme, but it is an example of how researchers might obtain enzymes with desired traits, in the simplest, most obvious way possible. The simplest choice is to do nothing on the molecular level, but seek out naturally-occurring proteins with the characteristics that suit our intended needs.
Although this is one of the earliest and most traditional applied biotechnological practices, used long before we had the scientific “know-how” to control genetic and protein sequences, it is still in use today. For example, in the hunt for truly thermostable enzymes, capable of metabolic catalysis at temperatures as high as 80-100°C, scientists actively search deep-sea hydrothermal vents worldwide for new species of bacteria expressing new genes.
2) Selective Pressure
If a naturally-occurring enzyme with the desired traits is not readily available, the next simplest option for obtaining one is to create an environment of accelerated natural selection. That is, take a microorganism expressing an enzyme with properties as close to the desired traits as you can find, and expose it to conditions of gradually increased intensity. If the thermostable enzyme you have has an optimum temperature of 60°C, you might try growing the microorganism at 65, then 70, then 75°C, gradually working up to the temperature you desire, in the presence of the substrate of interest. If the substrate is a carbon source, providing it as the only carbon source in the media forces the microbe to utilize that source as food, and gradually raising the temperature might result in adaptations giving more effective enzyme activity at the higher temperatures.
A similar approach can be used to change substrate specificity for an enzyme that can use multiple carbon sources, some more preferentially than others. The key is to have a means of selecting for the strains that have adapted and show some advantage, whether it be outright survival, or faster growth on certain substrates, at certain temperatures, or at a certain pH. Examples of such selection tools are a bioindicator that becomes detectable through some kind of analytical method once the desired enzyme reaction takes place, or a pH indicator, if the reaction changes the pH of the culture growth media.

Revivicor's Regenerative Medicine Project

I wanted to write a series of case studies on startup biotech companies that are focused on stem cell research. Revivicor intrigued me because of their novel approach to stem cell isolation. Their regenerative medicine approach has the potential to bypass many of the heavily debated ethical issues surrounding the use of embryonic or fetal tissues, as is done in therapeutic cloning. This is important for a company situated in the USA, since the past few years have been ripe with debate on this topic. The debate has heated up particularly in recent years, starting with the passing of a bill (House of Representatives, May 2005; Senate, July 2006) to loosen limitations on embryonic stem cell research, and the subsequent veto by President Bush. The House continued attempts to pass similar bills into 2007.
Revivicor describes itself as an early-stage biotechnology company developing engineered tissues and whole organs for human transplantation. In fact, the company is a privately-owned spin-off from PPL Therapeutics, famous for having produced “Dolly”, the world’s first cloned sheep. Formed in 2003, Revivicor included 25 professionals from PPL’s US division in Blacksburg, Virginia, in addition to a Pennsylvania subsidiary. It consists of Revivicor Holdings, a Delaware company, under which Revivicor Inc. of Virginia is the primary operating subsidiary and the core research and development arm. Clinical work is performed by Revivicor Inc. of Pennsylvania at the University of Pittsburgh Medical Center (UPMC).

World Leader in Animal Cloning and Transplant Technology

In addition to its stem cell projects, Revivicor leads the field in cloning transgenic pigs and production of whole organs and therapeutic cells from pigs. Much of their research is on prevention of hyperacute rejection (HAR) in cross-species transplantations. They aim to perform pig-to-primate transplant studies, islet cell transplants for reversal of diabetes, and possibly begin human clinical trials, within the next 2 years.
At the beginning of this decade, Revivicor was also involved in research on the use of skin cells and a process of “de-differentiation” to restore them to a pluripotent state, i.e. stem cells. It was intended that these induced pluripotent stem cells (iPSCs) could then be used to generate differentiated tissues of different organs that would be produced for transplants and regenerative treatments.

The Stem Cell Program

Revivicor’s stem cell research is based on “cellular reprogramming”. They use skin cells as precursors, because of the ease of obtaining them from any subject. Proof of concept for the de-differentiation approach was demonstrated using rhesus, porcine and bovine cells. The stem cell program was supported by a grant from the US Department of Commerce Advanced Technology Program (ATP) and, according to the Revivicor website, consisted of $1.9 million over three years ending January 2004. The regenerative medicine approach, as opposed to therapeutic cloning using fetal/embryonic stem cells, appeared to have enormous potential because of its ability to bypass the ethical restrictions of cloning.

Funding

Revivicor gets its support from an investment group led by UPMC, which owns a 31% interest. The balance of the group consists of Highmark Health Ventures Investment Fund, L.P., and Fujisawa Investments for Entrepreneurship, L.P.
According to CEO Dr. David Ayares, ATP funding has also been critical to the survival of Revivicor. The company has obtained four grants in total from ATP. The xenotransplantation research project, focused on the production of organs and tissues from cloned, genetically modified pigs for human transplantation, was funded in part by a $2 million ATP grant received by PPL in November 1999. The grant for stem cell research was the second ATP grant obtained by Revivicor. A third grant of $1.9 million over 3 years was awarded to fund further work on the development of safe therapeutic products from pigs for xenotransplantation. In September 2004, a 2-year grant of $1.8 million was obtained for research utilizing Revivicor’s platform technologies in gene knockout and somatic cell nuclear transfer to inactivate the antibody-producing genes in pigs and replace them with the human equivalents. This work is also funded in part (3 years, $3.1 million) by the US Department of Defense (DARPA). The goal is to produce human polyclonal antibodies as potential vaccines against biowarfare agents (e.g. anthrax) and infectious viruses like HIV and hepatitis. Revivicor retained rights to commercialization of all non-military applications of its gene knock-out animals, which can be used against a broad range of infectious diseases. The market for this work was projected by Revivicor to be worth over $5 billion by 2010.

deCODE Genetics

deCODE Genetics Inc., an Iceland-based biotech company with laboratories in several American cities, specializes in the discovery of genetic risk factors for diseases including Type 2 Diabetes, glaucoma, Alzheimer's, cardiovascular disease and skin, bladder, breast and prostate cancers. For the past decade, it appeared that the company was successfully applying that information to develop DNA-based diagnostic tests, as they were involved in licensing agreements and research projects with several major players in the pharmaceutical industry including Celera, Wyeth and Merck. However, recent events indicate otherwise.
Headquartered in Reykjavik, Iceland, the company was founded in August 1996 by Dr. Kari Stefansson. After the human genome sequence was completed in 2003, deCODE attempted to capitalize on Iceland’s excellent medical records and the genealogical information available on its close-knit, rather isolated population, to determine the genetic factors that cause several major diseases. Media releases on the discoveries and developments of deCODE products painted a picture of a very promising and strong player in the biomedical industry, and a sure bet for those choosing biotech stocks. However, these releases may have been misleading, since, on November 17, 2009, the company filed for Chapter 11 bankruptcy protection. In a sign of early promise, company stock had risen in value to over $30 by June 2000, only to drop below $2 by 2008.
According to The New York Times’ Nicholas Wade, the demise of the company was not so much a case of poor business management as one of oversimplifying the causes of disease. Experts now believe that the genetic causes of disease are more complex than deCODE’s founders realized and "the mutations that deCODE and others detected in each disease turned out to account for a small fraction of the overall incidence…responsible for too few cases to support the development of widely used diagnostic tests or blockbuster drugs".
On January 5, 2010, the company received notice from Nasdaq that its common stock had been suspended and would be delisted. The bankruptcy affects the parent company based in the United States, which will likely be liquidated, according to deCODE’s website. It is expected that the Icelandic subsidiary, Islensk Erfdagreining (IE), will be sold and will continue to investigate the genetic causes of diseases, hopefully with better success and a better understanding of how polymorphisms can play a role in disease.

Multi-purpose HealthCare Telemedicine Systems

The provision of effective emergency telemedicine and home monitoring solutions is the major field of interest discussed in this study. Ambulances, Rural Health Centers (RHC) or other remote health locations such as ships at sea are common examples of possible emergency sites, while critical care telemetry and telemedicine home follow-ups are important issues in telemonitoring. In order to support these different and growing application fields, we created a combined real-time and store-and-forward facility that consists of a base unit and a telemedicine (mobile) unit. This integrated system: (a) can be used when handling emergency cases in ambulances, RHCs or ships, by using a mobile telemedicine unit at the emergency site and a base unit at the hospital expert's site; (b) enhances intensive health care provision by giving a mobile base unit to the ICU doctor while the telemedicine unit remains at the ICU patient's site; and (c) enables home telemonitoring, by installing the telemedicine unit at the patient's home while the base unit remains at the physician's office or hospital. The system allows the transmission of vital biosignals (3–12 lead ECG, SPO2, NIBP, IBP, temperature) and still images of the patient. The transmission is performed through the GSM mobile telecommunication network, through satellite links (where GSM is not available) or through the Plain Old Telephony System (POTS) where available. Using this device, a specialist doctor can telematically "move" to the patient's site and instruct non-specialized personnel in handling an emergency or telemonitoring case. Because all data interchanged during telemedicine sessions must be stored and archived, we have equipped the consultation site with a multimedia database able to store and manage the data collected by the system. The performance of the system has been technically tested over several telecommunication means; in addition, the system has been clinically validated in three different countries using a standardized medical protocol.

Background

Telemedicine is defined as the delivery of health care and the sharing of medical knowledge over a distance using telecommunication means. Thus, the aim of Telemedicine is to provide expert-based health care to understaffed remote sites and to provide advanced emergency care through modern telecommunication and information technologies. The concept of Telemedicine was introduced about 30 years ago through the use of now-common technologies like telephone and facsimile machines. Today, Telemedicine systems are supported by state-of-the-art technologies like interactive video, high-resolution monitors, high-speed computer networks and switching systems, and telecommunications superhighways including fiber optics, satellites and cellular telephony [1].
The availability of prompt and expert medical care can meaningfully improve health care services in understaffed rural or remote areas. The provision of effective emergency Telemedicine and home monitoring solutions is the major field of interest discussed in this study. There is a wide variety of settings where those fields are crucial; ambulances, Rural Health Centers (RHC) and ships at sea are common examples of possible emergency sites, while critical care telemetry and Telemedicine home follow-ups are important issues in telemonitoring. In emergency cases where immediate medical treatment is the issue, recent studies conclude that early and specialized pre-hospital patient management contributes to the patient's survival [2]. Especially in cases of serious head injuries or trauma to the spinal cord or internal organs, the way the incidents are treated and transported is crucial for the future well-being of the patients.
A quick look at past car accident statistics illustrates the issue clearly: during 1997, 6,753,500 incidents were reported in the United States [3], in which about 42,000 people lost their lives and 2,182,660 drivers and 1,125,890 passengers were injured. In Europe during the same period, 50,000 people died as a result of car crash injuries and about half a million were severely injured. Furthermore, studies completed in 1997 in Greece [4], a country with the world's third highest death rate due to car crashes, show that 77.4% of the 2,500 people fatally injured in accidents were injured far away from any competent healthcare institution, resulting in long response times. In addition, the same studies reported that 66% of the deceased passed away during the first 24 hours.
Coronary artery disease is another common example of high death rates in emergency or home monitoring cases, since two thirds of all patients still die before reaching the central hospital. In a study performed in the UK in 1998 [5], it is sobering to see that among patients above 55 years old who die from cardiac arrest, 91% do so outside hospital, due to a lack of immediate treatment. In cases where thrombolysis is required, survival is related to the "call to needle" time, which should be less than 60 minutes [6]. Thus, time is the enemy in the acute treatment of heart attack or sudden cardiac death (SCD). Many studies worldwide have proven that a rapid response time in pre-hospital treatment of acute cardiac events decreases mortality and improves patient outcomes dramatically [7]-[12]. In addition, other studies have shown that a 12-lead ECG performed during transportation increases the time available to perform thrombolytic therapy effectively, thus preventing death and maintaining heart muscle function [13]. The reduction of all these high death rates is definitely achievable through strategies and measures that improve access to care, the administration of pre-hospital care and patient monitoring techniques.
Critical care telemetry is another case of handling emergency situations. The main point is to continuously monitor intensive care unit (ICU) patients at a hospital and, at the same time, to display all telemetry information to the competent doctors anywhere, anytime [14]. In this pattern, the responsible doctor can be informed about the patient's condition on a 24-hour basis and provide vital consulting even when not physically present. This is feasible through advanced telecommunication means, in other words via Telemedicine.
Another important Telemedicine application field is home monitoring. Recent studies show [15] that the number of patients being managed at home is increasing, in an effort to cut part of the high cost of hospitalization while trying to increase the patient's comfort. Using low-cost televideo equipment that runs over regular phone lines, providers are expanding the level of care while reducing the frequency of visits to healthcare institutions [16]. In addition, a variety of diagnostic devices can be attached to the system, giving the physician the ability to see and interact directly with the patient. For example, pulse oximetry and respiratory flow data can be electronically transmitted (for patients with chronic obstructive pulmonary disease). Diabetes patients can have their blood glucose and insulin syringe checked prior to injection to confirm the correct insulin dosage. Furthermore, obstetric patients can have their blood pressure and fetal heart pulses monitored remotely and stay at home rather than being admitted prematurely to a hospital.
It is common knowledge that the people who monitor patients at home or are the first to handle emergency situations do not always have the advanced theoretical background and experience required to manage all cases properly. Emergency Telemedicine and home monitoring can solve this problem by enabling experienced neurosurgeons, cardiologists, orthopedists and other skilled people to be virtually present at the emergency medical site. This is done through wireless transmission of vital biosignals and on-scene images of the patient to the experienced doctor. A survey [17] of the Telemedicine market states that emergency Telemedicine is the fourth most needed Telemedicine topic, covering 39.8% of market requests, while home healthcare covers 23.1%. The same survey also points out that the use of such state-of-the-art technologies has enhanced patient outcomes by 23%.
Several systems that could cover emergency cases [18]-[23], home monitoring cases [24]-[25] and critical care telemetry [14] have been presented over the years. Recent developments in mobile telecommunications and information technology have enhanced the capability to develop telemedicine systems using wireless communication means [26]-[32]. In most cases, however, only the store-and-forward procedure was successfully elaborated, while the great majority of emergency cases require real-time transmission of data.
To cover as many of these different and growing demands as possible, we created a combined real-time and store-and-forward facility that consists of a base unit and a telemedicine unit, where this integrated system:
• Can be used when handling emergency cases in ambulances, RHC or ships by using the Telemedicine unit at the emergency site and the expert's medical consulting at the base unit
• Enhances intensive health care provision by giving the telemedicine unit to the ICU doctor while the base unit is incorporated with the ICU's in-house telemetry system
• Enables home telemonitoring, by installing the telemedicine unit at the patient's home while the base unit remains at the physician's office or hospital.
The Telemedicine device is compatible with monitors from some of the main vital-signs monitor manufacturers, such as the Johnson & Johnson CRITIKON Dinamap Plus and the Welch Allyn Protocol (Propaq). It is able to transmit both 3- and 12-lead ECGs, vital signs (non-invasive blood pressure, temperature, heart rate, oxygen saturation and invasive blood pressure) and still images of a patient by using a great variety of communication means (satellite, GSM and Plain Old Telephony System – POTS). The base unit is comprised of a set of user-friendly software modules that can receive data from the Telemedicine device, transmit information back to it and store all data in a database at the base unit. The communication between the two parts is based on the TCP/IP protocol. The general framework for the above system was developed under the EU-funded TAP (Telematics Applications Programme) projects, the EMERGENCY 112 project (HC 4027) [33] and the Ambulance project (HC 1001) [22].
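The paper does not publish the wire format used between the two units, so the following is only a hypothetical illustration of the general idea of sending a vital-signs record from a telemedicine unit to a base unit over TCP/IP; the host, port and field names are invented and are not part of the described system.

# Hypothetical sketch only: one invented vital-signs record sent as
# length-prefixed JSON over a TCP socket to a base unit.
import json
import socket

BASE_UNIT_HOST = "192.0.2.10"   # placeholder address for the base unit
BASE_UNIT_PORT = 5000           # placeholder port

record = {
    "patient_id": "demo-001",
    "heart_rate_bpm": 78,
    "spo2_percent": 97,
    "nibp_mmHg": {"systolic": 122, "diastolic": 81},
    "temperature_c": 36.8,
}

def send_record(rec: dict) -> None:
    payload = json.dumps(rec).encode("utf-8")
    with socket.create_connection((BASE_UNIT_HOST, BASE_UNIT_PORT), timeout=10) as sock:
        # Length-prefix the payload so the receiver knows where the message ends.
        sock.sendall(len(payload).to_bytes(4, "big") + payload)

if __name__ == "__main__":
    send_record(record)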

Methods

Trends and needs of Telemedicine systems

As mentioned above, the scope of this study was to design and implement an integrated Telemedicine system able to handle different Telemedicine needs, especially in the fields of:
• Emergency health care provision in ambulances, Rural Health Centers (or any other remotely located health center) and ships at sea
• Intensive care patient monitoring
• Home telecare, especially for patients suffering from chronic and /or permanent diseases (like heart disease).
In other words, we designed a "multi-purpose" system consisting of two major parts: a) a Telemedicine unit (which can be portable or not, depending on the case) and b) a base unit or doctor's unit (which can be portable or not, depending on the case, and is usually located at a central hospital).
Figure 1 describes the overall system architecture. In each application, the Telemedicine unit is located at the patient's site, whereas the base unit (or doctor's unit) is located at the place where the signals and images of the patient are sent and monitored. The Telemedicine device is responsible for collecting data (biosignals and images) from the patient and automatically transmitting them to the base unit. The base unit is comprised of a set of user-friendly software modules, which can receive data from the Telemedicine device, transmit information back to it and store important data in a local database. The system has several different applications (with small changes each time), according to the nature and needs of the current healthcare provision setting.

Computational model

Background

Coronary artery bypass grafting surgery is an effective treatment modality for patients with severe coronary artery disease. The conduits used during the surgery include both arterial and venous conduits. The long-term graft patency rate for the internal mammary arterial graft is superior, but the same is not true for saphenous vein grafts. At 10 years, more than 50% of vein grafts will have occluded, and many of them are diseased. Why do saphenous vein grafts fail the test of time? Many causes have been proposed for saphenous graft failure. Some are non-modifiable and the rest are modifiable. Non-modifiable causes include the different histological structure of the vein compared to the artery and the size disparity between the coronary artery and the saphenous vein. However, researchers are more interested in the modifiable causes, such as graft flow dynamics and wall shear stress distribution at the anastomotic sites. Formation of intimal hyperplasia at the anastomotic junction has been implicated as the root cause of long-term graft failure.
Many researchers have analyzed the complex flow patterns in the distal sapheno-coronary anastomotic region, using various simulated models, in an attempt to explain the site of preferential intimal hyperplasia based on the flow disturbances and differential wall stress distribution. In this paper, the geometrical bypass models (aorto-left coronary bypass graft model and aorto-right coronary bypass graft model) are based on real-life situations. In our models, the dimensions of the aorta, saphenous vein and coronary artery simulate the actual dimensions at surgery. Both the proximal and distal anastomoses are considered at the same time, and we also take into consideration the cross-sectional shape change of the venous conduit from circular to elliptical. Contrary to previous works, we have carried out a computational fluid dynamics (CFD) study in the entire aorta-graft-perfused artery domain. The results reported here focus on (i) the complex flow patterns both at the proximal and distal anastomotic sites, and (ii) the wall shear stress distribution, which is an important factor that contributes to graft patency.
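For reference, wall shear stress in a Newtonian fluid is the product of the dynamic viscosity and the wall-normal velocity gradient; the relations below are the standard textbook definitions (not reproduced from this paper's own equations), with the Poiseuille form given for fully developed flow in a straight vessel of radius R and volumetric flow rate Q:

% general definition and Poiseuille-flow estimate of wall shear stress
\tau_w = \mu \left. \frac{\partial u}{\partial n} \right|_{\mathrm{wall}},
\qquad
\tau_w = \frac{4 \mu Q}{\pi R^3}

Regions where the computed \tau_w departs strongly from such baseline values, or oscillates within the cardiac cycle, are the regions typically associated with intimal hyperplasia.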

Methods

The three-dimensional coronary bypass models of the aorto-right coronary bypass and the aorto-left coronary bypass systems are constructed using computational fluid-dynamics software (Fluent 6.0.1). To have a better understanding of the flow dynamics at specific time instants of the cardiac cycle, quasi-steady flow simulations are performed, using a finite-volume approach. The data input to the models are the physiological measurements of flow-rates at (i) the aortic entrance, (ii) the ascending aorta, (iii) the left coronary artery, and (iv) the right coronary artery.

Results

The flow field and the wall shear stress are calculated throughout the cycle, but reported in this paper at two different instants of the cardiac cycle, one at the onset of ejection and the other during mid-diastole for both the right and left aorto-coronary bypass graft models. Plots of velocity-vector and the wall shear stress distributions are displayed in the aorto-graft-coronary arterial flow-field domain. We have shown (i) how the blocked coronary artery is being perfused in systole and diastole, (ii) the flow patterns at the two anastomotic junctions, proximal and distal anastomotic sites, and (iii) the shear stress distributions and their associations with arterial disease.

Conclusion

The computed results have revealed that (i) maximum perfusion of the occluded artery occurs during mid-diastole, and (ii) the maximum wall shear stress variation is observed around the distal anastomotic region. These results can give clinicians a better understanding of vein graft disease, and hopefully we can offer a solution to alleviate or delay its occurrence.

Biological effects

The literature on the biological effects of the magnetic and electromagnetic fields commonly utilized in magnetic resonance imaging systems is surveyed here. After an introduction on the basic principles of magnetic resonance imaging and the electric and magnetic properties of biological tissues, the basic phenomena needed to understand the bio-effects are described in classical terms. Values of field strengths and frequencies commonly utilized in these diagnostic systems are reported in order to allow the integration of the specific literature on the bio-effects produced by magnetic resonance systems with the vast literature concerning the bio-effects produced by electromagnetic fields. This work gives an overview of the findings about the safety concerns of exposure to static magnetic fields, radio-frequency fields, and time-varying magnetic field gradients, focusing primarily on the physics of the interactions between these electromagnetic fields and biological matter. The scientific literature is summarized, integrated, and critically analyzed with the help of authoritative reviews by recognized experts; international safety guidelines are also cited.

Introduction

Safety issues and discussions about potential hazards associated with magnetic resonance imaging (MRI) systems and procedures have been extremely controversial over the past decade: partly because of the disputed assertions about the role of electromagnetic fields in carcinogenesis or the promotion of abnormalities in growth and development [1-3]; partly because the assumption that MRI was inherently a safe procedure had reduced the importance of the publication of negative results [4]. Since the introduction of MRI as a clinical modality in the early 1980s, more than 100,000,000 diagnostic procedures (estimated) have been completed worldwide, with relatively few major incidents [5,6].
Most reported cases of MRI-related injuries have been caused by misinformation related to the MR safety aspects of metallic objects, implants, and biomedical devices [7,8]. In fact, the MR environment may be unsafe for patients with certain implants, primarily due to movement or dislodgment of objects made from ferromagnetic materials [9], but also because of heating and the induction of electrical currents, which may present risks to patients with implants or external devices [10]. These safety problems are typically associated with implants that have elongated configurations or that are electronically activated (e.g. neurostimulation systems, cardiac pacemakers, etc.). In the MR environment, magnetic field-related translational attraction and torque may cause hazards to patients and individuals with such implants. The risks are proportional to the strength of the static magnetic field, the strength of the spatial gradient, the mass of the object, its shape and its magnetic susceptibility. Furthermore, the intended in vivo use of the implant or device must be taken into consideration, because counteracting forces may be present that effectively prevent movement or dislodgment of the object. To date, more than one thousand implants and objects have been tested for MR safety or compatibility. This information is readily available to MR healthcare professionals, though it requires heightened awareness by the MR community to continually review and update their policies and procedures pertaining to MR safety based on the information in the relevant medical literature [11]. Physicians are aware of the absolute contraindications to MRI with regard to implantable devices; less familiar is the potential for an MRI-induced thermal or electrical burn associated with induced currents in conductors in contact with the patient's body. Although detailed studies concerning the burn hazard in MRI have not yet been reported, recent reports have indicated that direct electromagnetic induction in looped cables in contact with the patient may be responsible for excessive heating [12-14].
A comprehensive presentation and discussion of MR related hazardous effects is beyond the scope of this review, thus we will limit the discussion to bio-effects produced by MRI systems acting directly on the human body.
Several research studies have been conducted over the past thirty years in order to assess the potentially dangerous bio-effects associated with exposure to MRI diagnostics. Because of the complexity and importance of this issue, most of these works are dedicated to separately examining the biological effects produced by a particular magnetic or electromagnetic field source utilized in MRI. Moreover, the scientific literature contains an ever-increasing number of studies concerning biological effects produced by the interactions of biological matter with electromagnetic fields. Thus, there is a need to integrate and summarize the current findings on this topic and, at the same time, provide the basic knowledge needed to understand the physics of the interactions between electromagnetic fields and biological systems.
In the present work, after an introduction on the basic principles of MRI systems and the electric and magnetic properties of biological tissues, the basic principles needed to understand the bio-effects caused by the three main sources of electromagnetic fields utilized in MRI procedures are described.

Basic principles of MRI procedures

Three different types of electromagnetic fields are utilized in creating an image based on magnetic resonance:
1. the static magnetic field, B0, which aligns the proton spins and generates a net magnetization vector in the human body;
2. the gradient magnetic field, which produces different resonant frequencies for aligned protons, depending on their spatial positions on the gradient axes; these gradient fields allow for the spatial localization of bi-dimensional MRI slices and hence the reconstruction of three dimensional MRI images;
3. the radio-frequency electromagnetic wave, centered at the proton resonant frequency, which rotates the magnetization vector out of the direction of the static magnetic field; the time during which the magnetization vector returns to equilibrium is different for each tissue, and this results in the two main imaging parameters, T1 and T2, which directly relate to the image contrast.
These three fields are essential features of MRI procedures, and each interacts with the electromagnetic properties of biological tissues.
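The resonant frequency referred to above is set by the static field strength through the Larmor relation (a standard result, quoted here for orientation rather than taken from this review):

% Larmor relation between resonant frequency and static field strength
\omega_0 = \gamma B_0,
\qquad
f_0 = \frac{\gamma}{2\pi} B_0 \approx 42.58\ \mathrm{MHz\,T^{-1}} \times B_0

so that, for protons, a 1.5 T scanner operates at a radio frequency near 64 MHz.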

Evaluation

Background

Recent studies have shown the potential suitability of magnesium alloys as biodegradable implants. The aim of the present study was to compare the soft tissue biocompatibility of MgCa0.8 and commonly used surgical steel in vivo.

Methods

A biodegradable magnesium calcium alloy (MgCa0.8) and surgical steel (S316L), as a control, were investigated. Screws of identical geometrical conformation were implanted into the tibiae of 40 rabbits for postoperative follow-up periods of two, four, six and eight weeks. The tibialis cranialis muscle was in the direct vicinity of the screw head and was thus embedded in paraffin and assessed histologically and immunohistochemically. Haematoxylin and eosin staining was performed to identify macrophages, giant cells and heterophil granulocytes as well as the extent of tissue fibrosis and necrosis. Mouse anti-CD79alpha and rat anti-CD3 monoclonal primary antibodies were used for B- and T-lymphocyte detection. Evaluation of all sections was performed by applying a semi-quantitative score.

Results

Clinically, both implant materials were tolerated well. Histology revealed that a layer of fibrous tissue had formed between the implant and the overlying muscle in both MgCa0.8 and S316L, demarcated by a layer of synoviocyte-like cells at its interface with the implant. In MgCa0.8 implants, cavities surrounded by the same kind of cell type were detected within the fibrous tissue. The thickness of the fibrous layer and the amount of tissue necrosis and cellular infiltration gradually decreased in S316L. In contrast, in MgCa0.8 a decrease could only be noted in the first weeks of implantation, whereas these parameters were increasing again at the end of the observation period. B-lymphocytes were found more often in MgCa0.8, indicating humoral immunity and the presence of soluble antigens. Conversely, S316L displayed a higher quantity of T-lymphocytes.

Conclusions

Moderate inflammation was detected with both implant materials and resolved to a minimum during the first weeks, indicating comparable biocompatibility for MgCa0.8 and S316L. Thus, the application of MgCa0.8 as a biodegradable implant material seems conceivable. Since the inflammatory parameters were increasing again at the end of the observation period in MgCa0.8, it is important to observe the development of inflammation over a longer time period in addition to the present study.

Probabilistic cell model

Background

Methods of manual cell localization and outlining are so onerous that automated tracking methods would seem mandatory for handling huge image sequences; nevertheless, manual tracking is, astonishingly, still widely practiced in areas such as cell biology which are outside the influence of most image processing research. The goal of our research is to address this gap by developing automated methods of cell tracking, localization, and segmentation. Since even an optimal frame-to-frame association method cannot compensate for and recover from poor detection, it is clear that the quality of cell tracking depends on the quality of cell detection within each frame.

Methods

Cell detection performs poorly where the background is not uniform and includes temporal illumination variations, spatial non-uniformities, and stationary objects such as well boundaries (which confine the cells under study). To improve cell detection, the signal to noise ratio of the input image can be increased via accurate background estimation. In this paper we investigate background estimation, for the purpose of cell detection. We propose a cell model and a method for background estimation, driven by the proposed cell model, such that well structure can be identified, and explicitly rejected, when estimating the background.

Results

The resulting background-removed images have fewer artifacts and allow cells to be localized and detected more reliably. The experimental results generated by applying the proposed method to different Hematopoietic Stem Cell (HSC) image sequences are quite promising.

Conclusion

The understanding of cell behavior relies on precise information about the temporal dynamics and spatial distribution of cells. Such information may play a key role in disease research and regenerative medicine, so automated methods for observation and measurement of cells from microscopic images are in high demand. The proposed method in this paper is capable of localizing single cells in microwells and can be adapted for the other cell types that may not have circular shape. This method can be potentially used for single cell analysis to study the temporal dynamics of cells.

Introduction

The automated acquisition of huge numbers of digital images has been made possible by advances in, and the low cost of, digital imaging. In many video analysis applications, the goal is the tracking of one or more moving objects over time; examples include human tracking, traffic control, medical and biological imaging, living cell tracking, forensic imaging, and security [1-7].
The possibility of image acquisition and storage has opened new research directions in cell biology, tracking cell behaviour, growth, and stem cell differentiation. The key impediment on the data processing side is that manual methods are, astonishingly, still widely practiced in areas such as cell biology which are outside the influence of most image processing research. The goal of our research, in general, is to address this gap by developing automated methods of cell tracking.
Although most televised video involves frequent scene cuts and camera motion, a great deal of imaging, such as medical and biological imaging, is based on a fixed camera which yields a static background and a dynamic foreground. Moreover, in most tracking problems it is the dynamic foreground that is of interest, hence an accurate estimation of the background is desired which, once removed, ideally leaves us with the foreground on a plain background. The estimated background may be composed of one or more of random noise, temporal illumination variations, spatial distortions caused by CCD camera pixel non-uniformities, and stationary or quasi-stationary background structures.
We are interested in the localization, tracking, and segmentation of Hematopoietic Stem Cells (HSCs) in culture to analyze stem-cell behavior and infer cell features. In our previous work we addressed cell detection/localization [8,9] and the association of detected cells [10]. In this paper cell detection and background estimation will be studied, with an interest in their mutual inter-relationship, so that by improving the performance of the background estimation we can improve the performance of the cell detection. The proposed approach contains a cell model and a point-wise background estimation algorithm for cell detection. We show that point-wise background estimation can improve cell detection.
There are different methods for background modelling, each of which employs a different method to estimate the background based on the application at hand, specifies relevant constraints to the problem, and makes different assumptions about the image features at each pixel, processing pixel values spatially, temporally, or spatio-temporally [11-23].
There is a broad range of biomedical applications of background estimation, each introducing a different method to estimate the background based on some specific assumptions relevant to the problem [12-14,24]. Close and Whiting [12] introduced a technique for motion compensation in coronary angiogram images to distinguish the artery and background contributions to the intensity. They modelled the image in a region of interest as the sum of two independently moving layers, one consisting of the background structure and the other consisting of the arteries. The density of each layer varies only by rigid translation from frame to frame, and the sum of the two densities is equal to the image density.
Boutenko et al. [13] assumed that the structures of interest are darker than the surrounding immobile background and used a velocity-based segmentation to discriminate vessels from background in X-ray cardio-angiography images, exploiting the faster vessel motion in comparison with the background motion.
Chen et al. [14] modelled the background of a given region of interest using the temporal dynamics of its pixels in quantitative fluorescence imaging of bulk-stained tissue. They modelled the intensity dynamics of individual pixels of a region of interest and derived a statistical algorithm to minimize background and noise, decomposing the fluorescent intensity of each pixel into background and stained-tissue contributions.
A simulation and analysis framework to study membrane trafficking in fluorescence video microscopy was proposed by Boulanger et al. [24]. They designed time-varying background models for fluorescence images and proposed statistical methods for estimating the model parameters. This method decides whether an image point belongs to the image background or to a moving object.
Several segmentation and tracking methods have been proposed for a broad range of biomedical applications, each introducing a different method to segment and/or track specific biological materials based on some specific assumptions relevant to the problem [25-28].
Cheng et al. used shape markers to separate clustered nuclei in fluorescence microscopy cellular images within a watershed-like algorithm [25]. Shape markers were extracted using the H-minima transform; a marking function was introduced to separate clustered nuclei, while a geometric active contour was used for the initial segmentation.
Gudla et al. proposed a region-growing method for segmentation of clustered and isolated nuclei in fluorescence images [26]. They used a wavelet-based approach and multi-scale entropy-based thresholding for contrast enhancement. They first over-segmented the nuclei and then merged neighboring regions into single or clustered nuclei based on area, followed by automatic multistage classification.
A semi-automatic mean-shift-based method for tracking migrating cell trajectories in in vitro phase-contrast video microscopy was proposed by Debeir et al. [28]. The method uses mean-shift principles and adaptive combinations of linked kernels, and was applied to detect different gray-level configurations. It requires manual initialization of the cell centroids on the first frame, uses no temporal filtering or time-dependent features, and does not provide precise information on cell boundaries and shapes.
Most tracking problems have an implicit, nonparametric model of the background to avoid making assumptions regarding the foreground. By developing a model for the background it is possible to find a classifier that labels each image pixel as background/not background; i.e., the foreground is identified as that which is not background. In contrast, the more focussed context of our cell tracking problem admits an explicit model of the foreground. Because of the low SNR of our problem, where illumination is limited to minimize cell phototoxicity, it is desired to remove all deterministic non-cell variations in the image (i.e., the background) before localizing the cells.
Some earlier works have integrated foreground detection and background estimation in a common framework; however, most previous methods classify each pixel as either foreground or background, their goal being the general segmentation of dynamic objects with no assumptions regarding the foreground. In contrast, our goal is the localization of foreground objects, given specific assumptions that are integrated in the form of a foreground model.
In our proposed method, instead of classifying each pixel as either foreground or background, we estimate a single global background and detect foreground objects (rather than labelling pixel by pixel). The proposed method treats foreground detection and background estimation as inter-related processes and takes advantage of this inter-relation to improve the performance of cell detection. In the proposed algorithm, the background elements are removed from the scene frame by frame using a spatio-temporal background estimator, while a probabilistic cell model is applied to the image sequence to localize cell centers. The spatio-temporal estimator has been applied to estimate the background in phase-contrast image sequences taken from living Hematopoietic (blood) Stem Cells in culture, and leads to substantial improvements in cell localization and cell outline detection.

Materials

To produce the data for this study, HSC samples were first extracted from mouse bone marrow and then cultured in custom arrays of microwells. The cells were imaged through a 5× phase-contrast objective with manual focusing, using a digital camera (Sony XCD-900) connected over an IEEE 1394 (FireWire) interface. Images were sampled every three minutes over the course of several days. During imaging, chambers were maintained at 37°C in a 5% CO2 humidified air environment.

Methods

Two original frames taken from a cropped well are depicted in Fig. 1(a-i) and 1(a-ii). Well cropping is often approximate, and the well boundaries may be partially or completely visible in the cropped image sequence, as can be seen in Fig. 1(b-i). Modelling cells on a uniform, zero-mean background requires that any existing background be estimated and subtracted.
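A minimal sketch of this idea is given below. It uses a per-pixel temporal median as the background estimate rather than the authors' spatio-temporal estimator, and all array names and parameters are illustrative assumptions, not taken from the paper.

```python
# Sketch only: temporal-median background estimation and subtraction for a
# fixed-camera image stack, so that cells sit on an approximately zero-mean field.
import numpy as np

def estimate_background(frames):
    """frames: (T, H, W) stack of phase-contrast images from a fixed camera.

    With a static camera and slowly moving cells, the per-pixel temporal median
    is dominated by the stationary background (well walls, illumination pattern).
    """
    return np.median(frames, axis=0)

def subtract_background(frames, background):
    """Return background-subtracted frames, ideally leaving cells on a flat field."""
    corrected = frames - background[None, :, :]
    return corrected - corrected.mean(axis=(1, 2), keepdims=True)  # enforce ~zero mean

# Usage with a synthetic stack standing in for cropped well images:
rng = np.random.default_rng(1)
stack = rng.normal(100.0, 2.0, size=(50, 64, 64))
bg = estimate_background(stack)
residual = subtract_background(stack, bg)
```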

Robust Peak Recognition

Background

The waveform morphology of intracranial pressure (ICP) pulses is an essential indicator for monitoring and forecasting critical intracranial and cerebrovascular pathophysiological variations. While current ICP pulse analysis frameworks offer satisfactory results on most pulses, we observed that the performance of several of them deteriorates significantly on abnormal or simply more challenging pulses.

Methods

This paper provides two contributions to this problem. First, it introduces MOCAIP++, a generic ICP pulse processing framework that generalizes MOCAIP (Morphological Clustering and Analysis of ICP Pulse). Its strength is to integrate several peak recognition methods to describe ICP morphology and to exploit different ICP features to improve peak recognition. Second, it investigates the effect of incorporating automatically identified challenging pulses into the training set of the peak recognition models.
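As a rough illustration of the underlying task (not MOCAIP or MOCAIP++ themselves), the sketch below finds candidate sub-peaks in a synthetic ICP-like pulse with a standard peak detector; the sampling rate, prominence and distance thresholds are illustrative assumptions.

```python
# Illustrative candidate-peak detection on a single synthetic pulse with scipy.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
# Synthetic pulse with three sub-peaks, loosely analogous to P1, P2 and P3.
pulse = (np.exp(-((t - 0.15) / 0.04) ** 2) +
         0.8 * np.exp(-((t - 0.35) / 0.05) ** 2) +
         0.6 * np.exp(-((t - 0.55) / 0.06) ** 2))

# Candidate peaks; thresholds here are illustrative parameters only.
idx, props = find_peaks(pulse, prominence=0.05, distance=int(0.1 * fs))
print(t[idx], props["prominences"])
```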

Results

Experiments on a large dataset of ICP signals, as well as on a representative collection of sampled challenging ICP pulses, demonstrate that both contributions are complementary and significantly improve peak recognition performance in clinical conditions.

Conclusion

The proposed framework allows more reliable statistics about the ICP waveform morphology to be extracted from challenging pulses, in order to investigate the predictive power of these pulses with respect to the patient's condition.

Atomic force microscopy

Background

Surface roughness is the main factor determining bacterial adhesion, biofilm growth and plaque formation on dental surfaces in vivo. Air-polishing of dental surfaces removes biofilm but can also damage the surface by increasing its roughness. The purpose of this study was to investigate the surface damage caused by different air-polishing conditions applied in vitro to a recently introduced dental restorative composite.

Methods

Abrasive powders of sodium bicarbonate and glycine, applied for different treatment times (5, 10 and 30 s) and at different distances (2 and 7 mm), have been tested. The resulting root mean square roughness of the surfaces has been measured by atomic force microscopy, and the data have been analyzed statistically to assess significance. Additionally, a fractal analysis of the sample surfaces has been carried out.
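For reference, the sketch below shows one common way (illustrative assumptions, not necessarily the study's exact pipeline) of obtaining the RMS roughness Rq from an AFM height map, after first-order plane levelling so that sample tilt does not inflate the result.

```python
# Sketch: RMS roughness of a levelled AFM height map.
import numpy as np

def rms_roughness(height_map):
    """height_map: (N, M) array of heights z(x, y), e.g. in nm, from an AFM scan."""
    # Remove the best-fit plane (first-order flattening).
    n, m = height_map.shape
    y, x = np.mgrid[0:n, 0:m]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(n * m)])
    coeffs, *_ = np.linalg.lstsq(A, height_map.ravel(), rcond=None)
    levelled = height_map - (A @ coeffs).reshape(n, m)
    return np.sqrt(np.mean(levelled ** 2))    # Rq, the RMS deviation of heights
```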

Results

The minimum surface roughening was obtained by air-polishing with glycine powder for 5 s, at either of the considered distances, which resulted in a mean roughness of ~300 nm on a 30 × 30 μm² surface area, whereas in the other cases it was in the range of 400-750 nm. Both untreated surfaces and surfaces treated with the maximum roughening conditions exhibited a fractal character, with comparable dimension in the 2.4-2.7 range, whereas this was not the case for the surfaces treated with the minimum roughening conditions.

Conclusions

For the dental practitioner it is of interest to learn that the use of glycine in air-polishing generates the least surface roughening on the considered restorative material, and is thus expected to allow the lowest rate of bacterial biofilm growth and dental plaque formation. Furthermore, the minimum-roughening conditions identified have been correlated with the disappearance of the surface fractal character, which could represent an additional method for screening air-polishing treatment efficacy.

Background

Dental caries is the most widespread disease, affecting about 95% of the world population at some point during their lives [1]. Caries follows bacterial plaque formation, which arises after an increase in the surface area accessible for bacterial adhesion due to the surface roughness associated with defects or damage of the dental structures [2-5]. In fact, the predominant role of surface roughness in bacterial adhesion, with respect to other cofactors such as surface energy, has already been clarified in the literature [6].
Traditional hand instruments or oscillating scalers used to remove dental plaque usually cause a significant increase in the roughness of the underlying dental surfaces [3,7], made of either pristine or restorative material, causing in turn faster re-growth of plaque in the period following the treatment. Therefore, air-polishing (AP) with simultaneously ejected water and pressurized air containing abrasive powders has been introduced in dental cleaning and is now routinely applied [8-10]. Sodium bicarbonate powder is widely used for AP [10]. Recently, glycine powder has also been tested in several in vitro, ex vivo and in vivo studies, demonstrating good clinical efficacy and a low abrasive effect [7,9,11-14].
Despite being the least invasive technique for dental surfaces, even AP may result in surface damage [3,15] when the working parameters of abrasive powder type, spraying time and distance are not correctly set. To date, AP surface effects have been studied by means of laser scanners or profilometers. One recognized advantage of these techniques lies in their ability to characterize large areas containing both untreated and AP-treated regions. This makes it possible to measure the resulting defect depth and the absolute loss of material, and thus evaluate the integrity of the dental structures [12]. However, laser scanners and profilometers do not permit high-resolution measurement of the surface roughness. In this work we have performed an in vitro analysis of the effect of AP on the surface of a commercial material used in dental restoration using atomic force microscopy (AFM), which allows a high-resolution, direct quantitative characterization of the surface roughness [16,17]. Firstly, the AP treatment conditions resulting in the lowest dental structure damage - i.e. surface roughening - have been identified. Secondly, the effect of the different AP treatment conditions on the possible fractal character of the surface roughness has been analyzed. Surface feature patterns exhibit a fractal character when they are self-affine, meaning that similar patterns are found when zooming in or out over different orders of magnitude of the lateral field of view. Fractal analysis has already been applied to dental surfaces for classification of dental patterns of different species in zoology [18] and for characterization of the wear patterns of bruxism [19], but to our knowledge has never been used in the analysis of AP of dental composite surfaces. This mathematical tool can provide a new way to account for the complexity of the topographical pattern of the treated material surface, which can in turn depend on the AP conditions. In fact, it is generally accepted that measures of roughness derived from the distribution of heights z alone, without any information on their spatial localization in the (x, y) plane, are insufficient to completely describe the surface roughness.
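One common way to make the self-affine notion quantitative, sketched below under the assumption of isotropic self-affine roughness (and not necessarily the procedure used in this study), is to measure how the RMS roughness w(L) of square windows of side L scales with L: the log-log slope gives the Hurst exponent H, and the fractal dimension of the surface is then D = 3 - H.

```python
# Sketch: Hurst exponent and fractal dimension from the scaling of window roughness,
# assuming a square, levelled height map z; window sizes are illustrative.
import numpy as np

def window_roughness(z, L):
    """Mean RMS roughness of non-overlapping L x L windows of the height map z."""
    n = (z.shape[0] // L) * L
    blocks = z[:n, :n].reshape(n // L, L, n // L, L).swapaxes(1, 2).reshape(-1, L, L)
    return np.mean([np.sqrt(np.mean((b - b.mean()) ** 2)) for b in blocks])

def fractal_dimension(z, sizes=(4, 8, 16, 32, 64)):
    w = [window_roughness(z, L) for L in sizes]
    hurst = np.polyfit(np.log(sizes), np.log(w), 1)[0]   # slope of log w vs log L
    return 3.0 - hurst                                   # D lies between 2 and 3 for rough surfaces
```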

Design of splints based on the NiTi

Background

The proximal interphalangeal (PIP) joint is fundamental to the functional nature of the hand. Flexion contracture of the PIP joint, secondary to trauma or illness, leads to an important functional loss. The use of correcting splints is the common procedure for treating this problem. Their action is based on the application of a small load and prolonged stress, which can be dynamic, static progressive or static serial.
It is important that the therapist has available a splint that can deliver a constant and sufficient force to correct the flexion contracture. Nowadays NiTi is commonly used in bio-engineering due to its superelastic characteristics. The authors' experience in the design of other devices based on the NiTi alloy makes it possible to carry out a new design in this work - the production of a finger splint for the treatment of flexion contracture of the PIP joint.

Methods

Commercial orthoses have been characterized using an INSTRON 5565 universal testing machine. A computational simulation of the proposed design has been conducted, reproducing its performance and using an ad hoc model for the NiTi material. Once the parameters were adjusted, the design was validated using the same type of tests as those carried out on the commercial orthoses.

Results and Discussion

For the commercial splints the recovery force falls to excessively low values as the angle increases. Force-angle curves for different lengths and thicknesses of the proposed design have been obtained, showing a practically constant recovery force over a wide range of angles, between 30° and 150° in every case. The whole treatment is therefore possible with a single splint, without the need for progressive replacements as the joint recovers.

Conclusions

A new model of splint based on a NiTi alloy has been designed, simulated and tested, comparing its behaviour with two of the most regularly used splints. Its use is recommended over other dynamic orthoses used in orthopaedics for the PIP joint. Moreover, its extremely simple design makes it easier to manufacture and easier for the specialist to use.

Background

The proximal interphalangeal (PIP) joint is fundamental to the functional nature of the hand. It is considered to be the functional epicentre, since 85% of the total encompassment when grasping an object depends on this joint [1]. Flexion contracture of the PIP joint, secondary to trauma or illness, leads to an important functional loss.
The use of correcting splints is the common procedure for treating this problem. Their action is based on the application of a small load and prolonged stress, which can be dynamic, static progressive or static serial [2]. Although the force applied is small and progressive, the neighbouring joints should be evaluated before use in patients with systemic illnesses, as the splints increase the stress on the joints and can cause finger edema [3]. This progressive application of force on the PIP joint stimulates tissue changes that enable elongation of the capsuloligamentous structures until the deformity is corrected [4].
The straightening forces developed by static splints were analysed by Wu [5] and those of dynamic splints by Fess [6]. Both systems base their biomechanical action on the application of three parallel forces. Later analyses [3] consider that the force delivered by the two systems is similar, and identify the pressure exerted on the dorsum of the damaged PIP joint, which is greater in static systems, as the main drawback of both [3].
The materials used in both types of splint are largely thermoplastics, which suffer a change in their mechanical resistance in the short term [7]. New materials such as neoprene have been proposed by other authors [8], although that model has the drawback of covering the whole finger, which can generate edema and be counterproductive.
Regarding the effectiveness of the two systems, good results have been published for both [9-12]. However, it is fundamental to know the biomechanics of each system in order to produce personalised devices adapted to the characteristics of each patient [3].
It is important that the therapist has available a splint that can deliver a constant and sufficient force to correct the flexion contracture. Nowadays NiTi is commonly used in bio-engineering due to its superelastic characteristics and shape memory [13]. The authors' experience in the design of other devices based on the NiTi alloy [14-17] makes it possible to carry out the design proposed in this work - the production of a finger splint for the treatment of flexion contracture of the PIP joint.
This paper describes the characteristics of the splint designed, comparing its biomechanical behaviour with that of commercial dynamic splints regularly used to treat the stiffness of the PIP joint.

Methods

The first step consisted in characterising the biomechanical properties of the splints that were to be used as reference. Two of the most regularly used splints have been chosen: the LMB Spring Finger Extension Splint (splint 1) and the LMB Spring-Coil Finger Extension Splint (splint 2).
These are two simple designs, consisting basically of a torsion spring with two angled arms that allow the splint to be fixed and locked onto the finger (Figs. 1a and 1b). The restoring torque of the spring is the origin of the forces applied at the ends of the splint, which are balanced by the reaction in the central section (Fig. 1c).
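A back-of-the-envelope sketch of this force balance is given below, assuming a linear torsion-spring law; the stiffness, deflection and arm length are illustrative numbers, not values measured in the study.

```python
# Sketch: force at the arm tips of a torsion-spring splint, tau = k * dtheta.
import math

def arm_force(k, deflection_deg, arm_length):
    """Force (N) at the end of an arm for a torsion spring of stiffness k (N*m/rad)
    deflected by deflection_deg from its rest angle, acting through an arm of
    length arm_length (m)."""
    torque = k * math.radians(deflection_deg)   # linear torsion-spring law
    return torque / arm_length                  # the lever arm converts torque into force

# Example with illustrative numbers: a 0.005 N*m/rad spring deflected 60 degrees,
# acting through 40 mm arms, gives roughly 0.13 N at each end of the splint;
# the reaction at the central section balances the two end forces.
print(arm_force(k=0.005, deflection_deg=60.0, arm_length=0.040))
```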

Review of Handbook of Physics

I have always been astonished whenever I open an encyclopedic book, imagining the effort it took to review so many closely related but actually different scientific contributions, and at the same time to provide the detailed interpretations needed to satisfy the experts in the field. I felt the same respect when I picked up the Handbook of Physics in Medicine and Biology, which was edited by Robert Splinter and published by the Taylor & Francis Group.
I quickly read over the entire book, then read parts of it in detail. My deep impression is that this is an excellent work by a highly competent team. The book chapters follow logically from the properties of the cell membrane through sensors and electroreception, biomechanics and fluid dynamics to the recording of bioelectrical signals, bioelectric impedance analysis, X-ray and computed tomography, magnetic resonance imaging, nuclear medicine, ultrasonic and thermographic imaging. I am less familiar with some of the microscopy and optical techniques as well as the lab-on-a-chip and the biophysics of regenerative medicine; the book covers a breadth of material that can hardly be mastered by any single person. Therefore, I will confine my comments to the book's presentation of techniques that I have used in the last decades, keeping in mind the impossibility of achieving a perfect balance between providing foundations of the subjects for students and advanced discussion of the topics for professionals.
The Handbook's discussion of fundamental properties of the cell membrane is well organized. However, I missed some applied topics of interest to bioengineers such as electrochemotherapy - a modern method of treating tumors that uses short but intense pulses of electric fields to permeabilize (electroporate) cell membranes to enhance the administration of cytotoxic drugs.
The Handbook also provides methodical, perspicuous and very useful discussions of action potential transmission and volume conduction. This is also true of the chapter devoted to the physics of reception, which gives profound information about the different senses: taste, smell, touch, pain, as well as hearing and vision. I was particularly interested in a short chapter dealing with biomedical engineering contributions to medical decision making. The progress in electronics, computer science and biomedical technologies has created higher expectations of enhanced technical contributions to medical decision making. In my opinion, these expectations are somewhat exaggerated and reflect overenthusiasm by their proponents. It is true that most expert systems for medical decision making can generate diagnostic conclusions more accurately than ordinary clinicians. However, a great diagnostician can sometimes factor in surprising but not statistically relevant data that might be a key consideration for an accurate decision. This is due to the specific thought processes of skilled physicians, who do not step through an algorithm in sequence as a computer program does. Incidentally, the mathematical requirements of Bayes' theorem (a fundamental theorem that governs the accuracy of medical decision making) - independence and invariability of the events, as well as marginal events that cover the whole group - are not discussed in this chapter.
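To make the role of Bayes' theorem concrete, here is a minimal worked example with illustrative numbers (not taken from the Handbook): the posterior probability of disease given a positive test, computed from prevalence, sensitivity and specificity.

```python
# Sketch: Bayes' theorem in diagnostic testing with hypothetical numbers.
def posterior_given_positive(prevalence, sensitivity, specificity):
    p_pos_given_disease = sensitivity
    p_pos_given_healthy = 1.0 - specificity
    p_pos = p_pos_given_disease * prevalence + p_pos_given_healthy * (1.0 - prevalence)
    return p_pos_given_disease * prevalence / p_pos

# With 1% prevalence, 95% sensitivity and 90% specificity the posterior is only ~8.8%,
# which illustrates why the prior and independence assumptions behind the theorem matter.
print(posterior_given_positive(prevalence=0.01, sensitivity=0.95, specificity=0.90))
```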
The Handbook offers a compressed discussion of the complex topic of biomechanics that introduces the basic information of the three muscle types. It includes a very interesting paragraph, which describes the technology of micro electro-mechanical systems as they are used in artificial muscles.
Also included is necessary information about the functional anatomy of the heart, the cardiac cycle (systole and diastole) and the mechanics of the cardiovascular system. The measurement of arterial blood pressure is given in terms of Korotkoff's sounds only. A parallel description of blood pressure measurement using the oscillatory principle would be very valuable, since both techniques are almost equally used in practice.
The chapters on fluid dynamics of the cardiovascular system and the discussion of stroke volume (SV) and the derived cardiac output prompted me to share some considerations. As is well known, there is a variety of methods to measure SV, e.g. dye dilution, Doppler ultrasound, sphygmomanometry and tonometry, as well as some invasive methods. None of these is recognized as the "gold standard" for accuracy, and each has its own supporters. Long ago my colleagues and I used impedance cardiography for monitoring SV in intensive care units in Bulgaria. The onset of the systolic wave, its peak and the most negative deflection corresponding to the closing of the aortic valve were detected from the time rate of change of the impedance signal. They were used to derive the SV according to the Kubicek formula, but not before the fiducial points shown on the screen were approved by the operator. Further, the method we employed allowed correction coefficients to be applied to adjust the obtained data, effectively calibrating them against another method, provided that method had been applied once to the monitored patient at the beginning.
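For reference, the Kubicek equation commonly used in impedance cardiography is sketched below with plausible illustrative values; this is general background, not necessarily the exact implementation described above.

```python
# Sketch: the Kubicek relation for stroke volume from thoracic impedance.
def kubicek_stroke_volume(rho, L, Z0, lvet, dzdt_max):
    """Stroke volume (mL) estimate.

    rho      -- blood resistivity (ohm*cm), often taken around 135 ohm*cm
    L        -- distance between the voltage-sensing electrodes (cm)
    Z0       -- baseline thoracic impedance (ohm)
    lvet     -- left ventricular ejection time (s)
    dzdt_max -- peak of |dZ/dt| during systole (ohm/s)
    """
    return rho * (L / Z0) ** 2 * lvet * dzdt_max

# Example with illustrative values, giving roughly 70 mL:
print(kubicek_stroke_volume(rho=135.0, L=30.0, Z0=25.0, lvet=0.3, dzdt_max=1.2))
```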
The Handbook includes a very nice presentation of the anatomy and physics of respiration, including the measurement of lung volumes, gas exchange and transport in the blood. There is also a superb review of ventilation. I missed a discussion of the problem of estimating the optimal moment of transition from mandatory to spontaneous breathing. Perhaps the answer to this problem is beyond the scope of a handbook.
As expected, much space is devoted to the recording of bioelectric signals. I am happy with the chapter dealing with the electrocardiogram (ECG), which includes the entire sequence of topics, from placement of electrodes on the body, lead formation of the acquired signals, description of the waves' origin, wave shape and duration, intervals and segments, continuing on to a description of abnormalities of the ECG in several cardiac diseases. It appears to me that a little more information on the rhythm disorders would have been suitable; only one figure shows the shape of ectopic beats within a bigeminic epoch. I would have wanted a few words devoted to the disturbances accompanying the ECG signal acquisition - power line interference, drift, tremor and other artefacts, which may hamper or even distort the morphological interpretation of ECG waveforms.
The basic concepts and applications of electroencephalography (EEG) are also well covered. The authors of that chapter have emphasized evoked and event-related brain potentials, together with the signal averaging techniques necessary for their correct interpretation. The EEG is routinely analyzed by frequency analysis. The chapter would benefit from including a discussion of Fourier analysis, with a few sentences about some inconveniences of that approach, namely the loss of the time parameter and the energy leakage in the spectrum due to an incorrect length of the selected epoch. These disadvantages can be overcome using the method of the first zero derivatives.
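As a small illustration of the leakage issue (my own sketch, not material from the Handbook), the snippet below compares a rectangular and a Hann window on a synthetic alpha-like signal whose frequency does not fall exactly on a spectral bin; the sampling rate and segment length are assumptions.

```python
# Sketch: windowing reduces spectral leakage in EEG power spectrum estimation.
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # assumed EEG sampling rate (Hz)
t = np.arange(0, 10.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10.3 * t) + 0.5 * np.random.randn(t.size)  # synthetic 10.3 Hz rhythm + noise

# Rectangular window on an epoch whose length does not match the signal period:
f_rect, p_rect = welch(eeg, fs=fs, window='boxcar', nperseg=512)
# A Hann window concentrates the energy near 10 Hz and reduces leakage into neighbouring bins:
f_hann, p_hann = welch(eeg, fs=fs, window='hann', nperseg=512)

print(f_rect[np.argmax(p_rect)], f_hann[np.argmax(p_hann)])
```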
I also appreciated the presentation of X-ray instrumentation, computed tomography (CT), magnetic resonance imaging (MRI) and ultrasound imaging, which covers a good part of the biomedical imaging discussed in the Handbook; other highly useful technologies presented are positron emission tomography, thermography and different types of microscopy. The chapter on X-ray imaging lacks definitions of some very important parameters of the X-ray image - brightness, contrast and unsharpness. The central slice theorem is well presented, but it would have been helpful to enlarge the discussion of algorithms for CT image reconstruction, e.g. back projection, iterative and analytic algorithms. The reconstruction of the MRI image is not covered at all.
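To give a flavour of filtered back projection (my own sketch, not the Handbook's treatment), the snippet below projects and reconstructs the Shepp-Logan phantom with scikit-image, assuming a recent version of that library.

```python
# Sketch: filtered back projection of a CT-like slice using scikit-image.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)           # smaller phantom for speed
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)

sinogram = radon(image, theta=theta)                  # forward projection (Radon transform)
reconstruction = iradon(sinogram, theta=theta, filter_name='ramp')  # filtered back projection

error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {error:.4f}")
```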
In conclusion, the authors' team and the Taylor & Francis Group should be congratulated for this excellent handbook. My minor criticisms represent a particular look at the material and are not intended to disparage the overall high worth of the book. I suggest to readers of BioMedical Engineering OnLine: keep this book close at hand and do not hesitate to use it frequently.

Decoding hand movement velocity

Background

Decoding the neural activity associated with limb movements is key to motor prosthesis control. So far, most of these studies have been based on invasive approaches. Nevertheless, a few researchers have decoded kinematic parameters of a single hand non-invasively, using magnetoencephalography (MEG) and electroencephalography (EEG). In these EEG studies, center-out reaching tasks were employed. Whether hand velocity can be decoded from EEG recorded during a self-routed drawing task remains unclear.

Methods

Here we collected whole-scalp EEG data from five subjects during a sequential 4-directional drawing task and employed spatial filtering algorithms to extract amplitude and power features of the EEG in multiple frequency bands. From these features, we reconstructed hand movement velocity using Kalman filtering and a smoothing algorithm.
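A minimal sketch of Kalman-filter velocity decoding is given below, under assumptions not taken from the paper: a linear random-walk state model for 2-D velocity and a linear observation model relating EEG features to velocity, both fitted by least squares on training data. All names and the synthetic data are illustrative.

```python
# Sketch: Kalman-filter decoding of 2-D hand velocity from feature vectors.
import numpy as np

def fit_decoder(X, Z):
    """X: (T, 2) hand velocities; Z: (T, d) EEG feature vectors (training data)."""
    A = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T      # state transition (2x2)
    W = np.cov((X[1:] - X[:-1] @ A.T).T)                      # state noise covariance
    H = np.linalg.lstsq(X, Z, rcond=None)[0].T                # observation matrix (dx2)
    Q = np.cov((Z - X @ H.T).T)                               # observation noise covariance
    return A, W, H, Q

def kalman_decode(Z, A, W, H, Q):
    """Reconstruct velocities from test features Z, frame by frame."""
    x, P, out = np.zeros(2), np.eye(2), []
    for z in Z:
        x, P = A @ x, A @ P @ A.T + W                         # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Q)          # Kalman gain
        x = x + K @ (z - H @ x)                               # update with EEG features
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)

# Synthetic usage: 1000 frames, 20 features linearly related to velocity plus noise.
rng = np.random.default_rng(0)
true_v = np.cumsum(rng.normal(0, 0.05, size=(1000, 2)), axis=0)
feats = true_v @ rng.normal(size=(20, 2)).T + rng.normal(0, 0.5, size=(1000, 20))
A, W, H, Q = fit_decoder(true_v[:800], feats[:800])
decoded = kalman_decode(feats[800:], A, W, H, Q)
```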

Results

The average Pearson correlation coefficients between the measured and the decoded velocities are 0.37 for the horizontal dimension and 0.24 for the vertical dimension. The channels over the motor, posterior parietal and occipital areas contribute most to the decoding of hand velocity. By comparing the decoding performance of features from different frequency bands, we found that not only slow potentials in the 0.1-4 Hz band but also oscillatory rhythms in the 24-28 Hz band may carry information about hand velocity.

Conclusions

These results provide further support for the neural control of motor prostheses based on EEG signals and appropriate decoding methods.