ETHICAL AND LEGAL PRINCIPLES OF BIOMEDICAL STUDIES OF HUMANS AND ANIMALS.
THE CONCEPT OF BIOSAFETY AND RISK OF BIOMEDICAL TECHNOLOGIES.
Bioethics is the study of the controversial ethical issues brought about by advances in biology and medicine. Bioethicists are concerned with the ethical questions that arise in the relationships among the life sciences, biotechnology, medicine, politics, law, and philosophy. Bioethics also includes the study of the more commonplace questions of values (“the ethics of the ordinary”) which arise in primary care and other branches of medicine.
Parallel to the public discussion on the benefits and risks of nanotechnology, a debate on the ethics of nanotechnology has begun, and it has been postulated that a new “nano-ethics” is necessary. In this debate, the visionary and speculative innovations – positive as well as negative – associated with nanotechnology stand in the foreground. This contribution attempts to identify the new ethical aspects of nanotechnology more systematically than has been the case so far. It turns out that there are hardly any completely new ethical aspects raised by nanotechnology. Rather, it is primarily a matter of gradual shifts of emphasis and relevance in questions which are, in principle, already known and which give reason for ethical discussions on nanotechnology. In a certain manner, structurally novel ethical aspects do arise through the important role played by visions in the public discourse. New questions are also posed by the fact that previously separate lines of ethical reflection converge in the field of nanotechnology. The proposal of an independent “nano-ethics”, however, seems exaggerated.
In view of the revolutionary potential attributed to nanotechnology,1,2 it is not surprising that this technology has found great interest in the media and in the public at large. Ethical, legal, and social implications (ELSI) are already being elaborated by commissions and study groups.
Ethical reflection on nanotechnology has, in fact, already coined new terms such as “nano-ethics”,3,4 but has, to date, hardly accomplished more than to proclaim a need for ethics in and for nanotechnology.5 The ethical aspects discussed in the – to date, few – treatises are evidence of a tentative approach to a relatively new field of science and technology rather than of systematic analysis. Criteria for determining why certain topics, such as self-replicating nanorobots or nanoparticles, should be ethically relevant are not given. The normative fuzziness of unclear criteria is compounded by the cognitive fuzziness caused by a lack of knowledge about the real possibility of the technical innovations concerned. The result, according to the working hypothesis of this contribution, discloses, above all, a diffuse uneasiness regarding rapid scientific advances in nanotechnology.
In view of this situation, the purpose of this contribution consists primarily in studying current and foreseeable developments in nanotechnology from the viewpoint of ethics. Which developments are ethically relevant? Are there ethical questions which have already been answered by current discussions in the ethics of technology or in bioethics? Are there developments which pose completely new questions? To this end, it is necessary – besides inquiry into the present state of the discussion – to look for criteria for deciding when something is ethically relevant. These criteria are then applied to the field of nanotechnology, and the ethical challenges are “mapped out”. This is done on the basis of an evaluation of the current literature, and of pertinent references on the Internet. After answering the question on the necessity of nano-ethics, comments are subsequently made on the role to be played by ethics in the further development of nanotechnology.
ETHICS FOR NANOTECHNOLOGY – RELEVANCE CRITERIA
It has been controversial for quite some time whether science and engineering have any morally relevant content at all (and could therefore be a subject of ethical reflection).
Into the 1990’s, technology was held to be value-neutral. In numerous case studies, however, the normative background of decisions on technology (even those made in the laboratory) has since been recognized and made a subject of reflection.6 The basic assumption of this development is not to see technology solely as a sum of abstract objects or processes, but to take its embeddedness in societal processes seriously.
Technology is not nature, and does not originate of itself, but is consciously produced to certain ends and purposes – namely, to bring something about which wouldn’t happen of itself. Technology is therefore always embedded in societal goals, problem diagnoses, and action strategies. In this sense, there is no “pure” technology, i.e. a technology completely independent of this societal dimension.
Normative aspects of science and technology, in a morally pluralistic society, unavoidably lead to societal debates at the least, and often also to conflicts over technology. As a rule, what is held to be desirable, tolerable, or acceptable is controversial in society. Open questions and conflicts of this type in the context of science and technology are the point of departure for the ethics of technology.
Technology conflicts are, as a rule, not only conflicts over technological means (e.g., in questions of efficiency), but are also conflicts over visions of the future, of concepts of humanity, and on views of society. The role of the ethics of technology consists of analysis of the normative structure of technology conflicts and the search for rational, argumentative, and discursive methods of resolving them.
Even if technology is fundamentally laden with values, most technically relevant decisions can be classified as the “standard case” in the following sense: they don’t subject the normative aspects of the basis for the decision (criteria, rules or regulations, goals) to specific reflection, but assume them to be given for the respective situation and accept the frame of reference they create. In such cases, no explicit ethical reflection is, as a rule, necessary, even if normative elements self-evidently play a vital role in these decisions – the normative decision criteria are clear, acknowledged, and unequivocal. A conflict with moral convictions or a situation of normative ambiguity is then out of the question: the information on the normative framework can be integrated into the decision by those affected and by those deciding on the basis of axiological information, without analyzing it or deliberating on it. The (national and international) legal regulations, the rules of the relevant institutions (e.g., corporate guidelines), where applicable the code of ethics of the professional group concerned, as well as general societal usage, are elements of this normative framework.
The requirements to be set in the normative framework in order that it can be accepted as a “standard case” can be operationalized according to the following criteria:
• Pragmatic completeness: the normative framework has to treat the decision to be made fully with regard to normative aspects;
• Local consistency: there must be a “sufficient” measure of consistency between the normative framework’s elements;
• Unambiguity: among the relevant actors, there must be a sufficiently consensual interpretation of the normative framework;
• Acceptance: the normative framework must be accepted by those affected as the basis for the decision;
• Compliance: the normative framework also has to be complied with in the field concerned.
There are such well-established “standard” situations in many fields (e.g., in decision-making processes in public administration or in private businesses). Technical innovations and scientific progress, however, can challenge such situations by presenting new questions or by shaking views previously held to be valid.
This is then the entry point for ethical reflection in questions of science and engineering, for the explicit confirmation, modification, or augmentation of the normative framework.
Whether there are new challenges for ethics in nanotechnology and what they might be, will be investigated against this background below. In order to clarify the question of ethical relevance, an examination of the topics treated in previous papers judged on the basis of systematic criteria for the ethical relevance of science and technology was carried out.
The guiding questions in this study were:
• Which are the ethical aspects in the sense defined above, especially which are the genuine ethical aspects in the subjects for ethics in nanotechnology named in current publications?
• Is relevant and sufficiently evident knowledge available for the assessment of scientific and technical developments in nanotechnology or of their use (as addressed with regard to ethics)?
• Which of the ethical aspects of nanotechnology are specific for nanotechnology?
This method permits a clear determination of the questions to which ethical reflection can make a real contribution. It also permits an estimate of the extent to which an independent “nano-ethics” would be justified. The resulting “map” of ethical aspects described below is a survey of the use of nanotechnology with regard to aspects of content, and includes partially overlapping fields. These fields have in common that nanotechnology will play a constitutive role in the foreseeable advances in their areas.
In practical philosophy and in ethics, nanotechnology has to date seldom been made a subject of discussion. [NT = nanotechnology]: “While the number of publications on NT per se has increased dramatically in recent years there is very little concomitant increase in publications on the ethical and social implications to be found.”
Some postulate a need for ethics and make reference, above all, to remote visions (for example, the abolition of aging or “runaway nanorobots”3,9). Certain terms, such as privacy, man-machine relationship, or equity are often cited (e.g.,10,11). Systematic studies, which could do justice to the diversity and breadth of nanotechnology, haven’t yet been presented. At the same time, concern over the detrimental consequences of this deficit grows: “The lack of dialogue between research institutes, granting bodies and the public on the implications and directions of NT may have devastating consequences, including public fear and rejection of NT without adequate study of its ethical and social implications.”
Nanoparticles – Opportunities versus Risks
A vast potential market for nano-based products is seen in the field of new materials. By means of an admixture or specific application of nanoparticles, new material properties can be brought about, for instance, in surface treatment. Since providing new technical functions and performance characteristics is a common motivation for scientific and technological progress, the ethical questions raised by these advances take the “classical” form of questions about their possible side effects.
Artificially produced nanoparticles can be disseminated into the environment or enter the human body as emissions during production or through the daily use of products containing nanoparticles. Nanoparticles could eventually be transported as aerosols over great distances and be distributed diffusely. They could enter the human body by way of the lungs, through the skin, or via the digestive tract. How they interact while spreading and their impact on health and on the environment – in particular, potential long-term effects – are at present almost unknown. This applies above all to substances which don’t occur in the natural environment, such as fullerenes or nanotubes. As far as the potential proliferation of nanoparticles is concerned, aspects such as mobility, reactivity, persistence, lung infiltration, solubility in water, etc., have to be taken into consideration.
Questions of toxicity for the environment and for humans, on nano-material flow, on their behavior in spreading throughout various environmental media, on their rate of degradation, and their consequences for the various conceivable targets are, however, not ethical questions. In these cases, the pertinent empirical-scientific disciplines, such as toxicology or environmental chemistry, are competent. Their results would be of interest from the viewpoint of ethics as soon as the available empirical facts can be consulted to establish which practical consequences they have for working with nanoparticles:
• What follows from our present lack of knowledge about the possible side effects of nanoparticles? One radical consequence, namely, a moratorium on putting nanoparticles onto the market, as it would probably follow from Hans Jonas’s ethics of responsibility according to the priority of the negative prognosis, has already been demanded.
• More generally formulated: is the precautionary principle relevant in view of a lack of knowledge and what would follow from the answer?
• Which role do the – doubtlessly considerable – opportunities of nanoparticle-based products play in considerations of this sort? On which criteria can a judgement of benefits and hazards be based when the benefits are (relatively) concrete, but the hazards are hypothetical?
• Are comparisons of the risks of nanoparticles with types of risk known from other areas possible, in order to learn from them? Can criteria for assessing the nanoparticle-risks be gained from experience in developing new chemicals or medicines? Which normative premises enter into such comparisons?
• Is the discussion on threshold values and environmental standards16 transferable to nanoparticles? What about the acceptability or tolerability of risks? Which methods of determining critical values come into question, and how do they relate to ethical questions or questions of democratic theory?
The contribution of ethics to this subject therefore lies in a value judgement of the situation (the relationship of reliable knowledge to the degree of uncertainty), in clarifying the comparability to other types of risk, in disclosing the normative presuppositions and implications entering into it, and in providing the normative basis for practical consequences. These questions of the acceptability and comparability of risks, of the advisability of weighing risks against opportunities, and of the rationality of action under uncertainty are, without doubt, of great importance in nanotechnology. A new field of application is developing here for the ethics of science and technology; the type of questions posed, however, is well-known from other discussions of risks (exposure to radiation; chemicals).
Sustainability: Just Distribution of Opportunities and Risks
Possible side effects of nanotechnology of a completely different type result from considerations of equity. In particular, in connection with sustainability,17 ethical questions arise concerning the spatial and temporal distribution of the opportunities and risks of nanotechnology: “Nanotech offers potential benefits in areas such as biomedicine, clean energy production, safer and cleaner transport and environmental remediation: all areas where it would be of help in developing countries. But it is at present mostly a very high-tech and cost-intensive science, and a lot of the current research is focused on areas of information technology where one can imagine the result being a widening of the gulf between the haves and the have-nots”. In this respect, we have to distinguish between intragenerational and intergenerational aspects.
The distribution of the use of natural resources between present and future generations belongs to the intergenerational aspects. Appreciable relief for the environment is expected from the use of nanotechnology: savings of material resources, reduction of the incidence of environmentally detrimental by-products, improvement of the efficiency of energy transformation, reduction of energy consumption, and the elimination of ecologically deleterious materials from the environment.
Decisive for the assessment of nanotechnology, or of product lines based on nanotechnology, from the viewpoint of sustainability is that a technology accumulates positive and negative contributions to sustainability throughout its entire “lifetime”, which extends from the primary sources of raw materials via transportation and processing to consumption, and finally ends with its disposal as waste.19 The entire life cycle of a technology is therefore decisive for the assessment of its sustainability. But nanotechnology is at present, in many branches, still in an early phase of development. For this reason, we can only speak of nanotechnology’s sustainability potentials.18 There is no guarantee that technical sustainability potentials will turn out to be real contributions to sustainable development. The discussion of sustainability potentials can, however, be employed constructively with regard to technological development, if further development is accompanied by ethical reflection on questions of distribution between present and future exploitation of nature.
Intragenerational problems of distributive justice present themselves basically in every field of technical innovation. Because scientific and technical progress requires considerable investments, it usually takes place where the greatest economic and human resources are already available. Technical progress thus increases existing inequalities of distribution. This can be illustrated by the example of nanotechnology in medicine.18 Nanotechnology-based medicine will, in all probability, be expensive medicine. Questions of equity and of access to (possible) medical treatments could become urgent in at least two respects: within industrialized societies, existing inequalities in access to medical care could be exacerbated by a highly technicized medicine making use of nanotechnology, and – with regard to less developed societies – the already existing and particularly dramatic inequalities between technicized and developing nations could likewise be further increased. Apprehensions with regard to both of these types of a potential “nano-divide” (after the well-known “digital divide”) are based on the assumption that nanotechnology can lead not only to new and greater options for individual self-determination (e.g., in the field of medicine), but also to considerable improvement of the competitiveness of national economies. Current discussions on distributive justice on the national and on the international level (in the context of sustainability as well) are therefore likely to gain new relevance with regard to nanotechnology.
Both of the aspects described are, however, not really new ethical aspects of technology, but rather intensifications of already existing problems of distribution. Problems of equity belong in principle to the ethical aspects of modern technology.
The Private Sphere and Control
Another field regularly mentioned among the ethical aspects of nanotechnology is the threat to privacy through new monitoring and control technologies. Nanotechnology offers a growing range of possibilities for gathering, storing, and distributing personal data. In the course of miniaturization, a development of sensor and memory technology is conceivable which, unnoticed by its “victim”, drastically increases the possibilities for acquiring data. Furthermore, miniaturization and networking of observation systems could considerably impede present control methods and data protection regulations, or even render them obsolete. For the military, new possibilities for espionage are opened. Passive observation of people could, in the distant future, be complemented by active manipulation – for instance, if it became possible to gain direct technical access to their nervous system or brain.
These scenarios are regarded by some to be not only realistic, but even certain:
“But what is not speculation is that with the advent of nanotechnology invasions of privacy and unjustified control of others will increase.” Underlying this opinion is an only thinly veiled technological determinism: “When new technology provides us with new tools to investigate and control others we will use them.” Within the private sphere, health is a particularly sensitive area. The development of small analyzers – the “lab on a chip” – can make it possible to compile comprehensive personal diagnoses and prognoses on the basis of personal health data. Stringent standards for data protection and for the protection of privacy therefore have to be set. Without sufficient protection of their private sphere, people are rendered manipulable, and their autonomy and freedom of action are called into question.
“Lab-on-a-chip” technology can facilitate not only medical diagnoses, but can also make fast, economical, and comprehensive screening possible (op. cit.). The rapid decoding of complete individual genetic dispositions could come within the reach of normal clinical work or of services outside the clinic. Everyone could have him- or herself tested, for example, for genetic dispositions for certain disorders – or could be urged by an employer or insurance company to do so. In this manner, individual persons could find themselves put under social pressure, and their freedom of action would be impaired. In addition, it would have to be clarified how to deal with results which might depress the patients affected over long periods of time as they fear the impending outbreak of a serious disease (which possibly never occurs). How to deal with the – presumably sometimes considerable – uncertainties of such diagnoses is also an important aspect in this connection.
Questions of privacy and of monitoring and controlling people are doubtlessly ethically relevant. The current discussions on restricting civil liberties for the sake of combatting terrorism are taking place against a background in which normative criteria play a pivotal role, and in which there are conflicts over this subject. On the other hand, the political dimension of these questions stands prominently in the foreground, for instance, in estimations of the immediacy of the danger and in statements on the assumed problem-solving capacity of the measures proposed. If the private sphere were endangered by nanotechnology, the ethical dimension would be more tangible. Even in this case, however, it would be likely to remain in the background, while the central discussion would concern data protection, a context in which we have already gained considerable experience.
But none of these questions of monitoring and of data protection is posed exclusively by nanotechnology. Even without nanotechnology, observation technologies have reached a remarkable stage of development which poses questions about the preservation of the private sphere. Even today, so-called smart tags based on RFID technology (Radio Frequency Identification) are being employed for access control, for ticketing (e.g., in public transportation), and in logistics. These objects at present measure several tenths of a millimetre in each dimension, so that they are practically unnoticeable to the naked eye. Further miniaturization will permit further reduction in size and the addition of more functions – without nanotechnology being needed – but nanotechnology will promote and accelerate these developments.
The ethically relevant questions of a right to know or not to know, of a personal right to certain data, and of a right to privacy, as well as the discussions on data protection, on possible undesirable inherent social dynamics, and, in consequence, on a drastic proliferation of genetic and other tests, have been a central point in bioethical and medical-ethical discussions for quite a while. Nanotechnological innovations can accelerate or facilitate the realization of certain technical possibilities, and therefore increase the urgency of the resulting problems; in this area, however, they don’t give rise to qualitatively new ethical questions.

Crossing the Border between Technology and Life

Basic life processes take place on a nano-scale, because life’s essential building-blocks (such as proteins, for instance) are precisely this size. By means of nanotechnology, biological processes are made technically controllable. Molecular “factories” (mitochondria) and “transport systems”, which play an essential role in cellular metabolism, can be models for controllable bio-nanomachines. Nanotechnology on this level could permit the “engineering” of cells. An intermeshing of natural biological processes with technical processes seems conceivable. The classical barrier between technology and life is increasingly being breached and crossed. One speaks of elements of living organisms in the language of classical mechanics: as factories, rotors, pumps, and reactors.
This is, at first glance, a cognitively and technically extremely interesting process, with a great deal of promise. The technical design of life processes on the cellular level, direct links and new interfaces between organisms and technical systems portend a new and highly dynamic scientific and technological field. Diverse opportunities, above all, in the field of medicine, stimulate research and research funding.
New ethical aspects are certainly to be expected in this field. Their concrete specification, however, will only be possible when research and development can give more precise information on fields of application and products. The corresponding discussions of risks could have structural similarities to the discussion on genetically modified organisms. It could come to discussions about safety standards for the research concerned, about “field trials”, and about release problems. The danger of misuse will be made a topic of debate, for example, the technical modification of viruses in order to produce new biological weapons. A wide range of future ethical discussions is opening, for which at present there is insufficient practical background for concrete reflection. In this field of nanotechnology (not in that of nanoparticles, [sec. 3.1]), similar societal resistance could be feared as in the case of genetic engineering.
A new area that is practically as well as ethically interesting consists of making direct connections between technical systems and the human nervous system. There is intensive work on connecting the world of molecular biology with that of technology. An interesting field of development is nanoelectronic neuro-implants (neurobionics), which compensate for damage to sensory organs, or to the nervous system, or increase the performance capacity of these organs and broaden the spectrum of human perception. Microimplants could restore the functions of hearing and eyesight. Even today, simple cochlear or retina implants, for example, can be realized. With progress in nanoinformatics, these implants could approach the smallness and capabilities of natural systems. Because of these undoubtedly positive goals, ethical reflection could, in this case, concentrate above all on the definition and prevention of misuse. Technical access to the nervous system, because of the possibilities for manipulation and control which it opens, is a particularly sensitive issue.
Extrapolating these lines of development into the realm of speculation, the convergence of technology and humanity – the conceivability (in the sense of a pure-thought possibility) of “cyborgs” as technically enhanced humans or humanoid technology – could be problematized.1 Developments of this type raise the question of humanity’s self-concept, which is of great ethical relevance. In nanotechnological visions, aspects repeatedly occur which blur the boundary between what human beings are and what they create with the help of technical achievements and applications. Such visions pose the question of the extent to which partly technical, partly biologically constructed man-machine chimeras can lay claim to the status of a person. An entire spectrum of anthropological and ethical questions follows from this question. In some US-American churches – in contrast to the Transhumanism discussion – precisely this aspect of nanotechnology is a central theme.
Nanotechnology acts as a playing-field on which various philosophies of life compete. In spite of the speculative nature of the subject, ethical reflection doesn’t seem premature. One needn’t see the main concentration of nanotechnology’s ethical aspects in this area, but one can certainly find enough indications that scientific and technical progress will, in the coming years, lend these questions an urgency they at present lack. In particular, advances in brain science and developments in “converging technologies”2 lead to this expectation – and would justify ethical reflection “in advance”. But the insight that the questions raised aren’t really specific to nanotechnology also applies here. Since the 1980’s, these subjects have been repeatedly discussed in the debates on artificial intelligence and on artificial life.
The Improvement of the Human Being
With the transgression of the boundary between technology and living beings from both sides, our understanding of what distinguishes human beings, and of the relationship mankind assumes toward its natural physical and psychic constitution, is also called into question. Within the tradition of technical progress – which has, at all times, transformed conditions and developments that until then had been taken as given, as unalterable fate, into influenceable, manipulable, and formable ones – the human body and its psyche are rapidly moving into the dimension of the formable. The vision of “enhancing human performance” has been conjured up, above all, in the field of “converging technologies”.
While the wish to “improve humanity” – probably in view of the experience of the cultural and social deficits of “real human beings” – has been expressed often in history, the approach of bringing about these improvements by technical means – assuming that this is realizable – is apparently new (science fiction, as a rule, doesn’t lay any claim to future realizability). Formerly, improvement utopias were founded rather on “soft” methods – above all, upbringing and education.
Technology had its place outside of human beings, as a means of augmenting mankind’s capacities for action. Technical enhancement of human beings themselves – if this were possible at all – would in any case pose a series of new ethical questions. Nanotechnology, in combination with biotechnology and medicine, opens perspectives for fundamentally altering and rebuilding the human body. At present, research is being done on tissue and organ substitution, which could be realized with the help of nano- and stem cell technologies. Nanoimplants would be able to restore human sensory functions or to complement them, but they would also be able to influence the central nervous system. While the cited examples of medical applications of nanotechnology remain within a certain traditional framework – because their purpose consists of “healing” and “repairing” deviations from an ideal condition of health, a classical medical goal – chances (or risks) of a remodelling and “improvement” of the human body beyond this framework are also opened up. This could mean extending human physical capabilities, e.g., to new sensory functions (for example, broadening the electromagnetic spectrum the eye is able to perceive). It could, however, also – by means of the direct connection of mechanical systems with the human brain – give rise to completely new interfaces between man and machine, with completely unforeseeable consequences. Even completely technical organs and parts of the body (or even entire bodies) are being discussed, which, in comparison with biological organisms, are supposed to have advantages such as – perhaps – increased stability against external influences.
There are initial anthropological questions concerning our concept of humanity and the relationship between humanity and technology. With them, at the same time, the question poses itself how far human beings can, should, or want to go in remodelling the human body, and to what end(s) this should or could be done. The difference between healing and enhancing interventions is difficult to draw, on conceptual grounds – in particular, the terms “health” and “illness” haven’t been clarified to date – as well as for practical reasons; a gradual approach would therefore be prudent. The improvement of mankind could also include the postponement of death.
According to the definition of health formulated by the World Health Organization (WHO), according to which health is a “state of complete physical, mental, and social well-being, and not merely the absence of disease or infirmity” (Charter of the WHO), aging could also be interpreted as a disorder. Overcoming aging with the help of nanotechnology would, then, in the sense of medical ethics, be nothing other than fighting epidemics or other diseases. Aging “as a disease” would be combatted medically as if it were the flu. Seen against the background of continuing and controversial discussions on the concept of illness in medical theory and in medical ethics, this is, however, not undisputed. Whether aging as a process and death are, in principle, acknowledged as a predetermined initial condition of human existence, and should only be made subject to medical treatment in their extreme expression, or whether aging and death are seen as conditions which are, whenever possible, to be abolished, depends on fundamental normative presuppositions, which, in view of the conflicts connected with them, are of great ethical relevance.
Possible answers from ethics could, according to the respective school of thought, turn out quite differently. A liberal eugenics based on utilitarian ethics could draw the conclusion that it doesn’t acknowledge any difference between therapeutic and enhancing interventions and could leave “the choice of purposes of interventions which alter characteristic traits to individual preferences”; Kantian ethics would thematize the instrumentalization of human beings; religious morals would bring to bear traditional human self-concepts (which are deeply rooted in culture) of man as a limited being (in time as well).
The practical relevance of such ethical questions in view of a – at least in the eyes of some protagonists – possible technical improvement of human beings (with the substantial participation of nanotechnology) may, at first sight, seem limited. Two considerations, however, contest this estimation: first, the vision of the technical enhancement of human beings is actually being seriously advocated. Research projects are being planned in this direction, and milestones for reaching this goal are being set up, whereby nanotechnology takes on the role of an “enabling technology”. Furthermore, technical enhancements are by no means completely new, but are – in part – actually established, as the example of plastic surgery as a technical correction of physical characteristics felt to be imperfections shows, and as is the case in the practice of administering psycho-active substances. It isn’t difficult to predict that the possibilities and the realization of technical improvements of human beings will increase; demand is conceivable. In view of the moral questions connected with this development and of their conflict potential, ethical reflection is needed in this field.
Is an Independent Nano-Ethics Necessary?
Nonetheless, we don’t have to reckon with a “Nano-ethics” as a new branch of applied ethics. The propagation of nano-ethics overlooks the fact that many of the ethical questions raised by nanotechnology are already known from other contexts of ethical reflection. The ethics of technology, bioethics, the ethics of medicine or also the theoretical philosophy of technology concern themselves with questions of sustainability, of risk assessment, of the interface between human beings and technology, especially between living beings and technology. These questions are in themselves not new, as the analyses of the individual formulations of the problematic definitions have shown.
Partially new, however, is their convergence in nanotechnology. Analogous to the well-known fact that the nanosciences and nanotechnology are fields in which the traditional borders between physics, chemistry, biology, and the engineering sciences are crossed, various traditional lines of ethical reflection also converge in the ethical questions of nanotechnology. The fashionable creativity in coining terms, as it manifests itself in designations like “Neurophilosophy” or “Nano-ethics”, obscures the integrative and cross-sectional nature of many ethical challenges rather than being particularly helpful. We don’t need any new sub-discipline of applied ethics called “nano-ethics”; but because new topics and questions are concentrated in nanotechnology, and because it accelerates scientific and technical progress, there is a need for ethics in and for nanotechnology. A presupposition is, in particular, the willingness of ethicists to engage in open reflection on the ethical aspects of nanotechnology beyond the scope of the classical “hyphenated ethics”, and in a dialogue with natural and engineering scientists.
Ethics as Concomitant Reflection of Nanotechnology
Ethics often seems to lag seriously behind technical progress and to fall short of the occasionally great expectations. The rapid pace of innovation in technicization has the effect that ethical deliberations often come too late: after all of the relevant decisions have already been made, it is far too late to influence the course of technology development. Technological and scientific progress shapes a reality which can no longer be revised after certain points of no return.
Ethics in this perspective, could, at best, act as a repair service for problems which have already arisen.
The above is, however, a one-sided view. Ethics actually can provide orientation in the early phases of innovation, e.g., because even the scientific and technical basis includes certain risks which are ethically unacceptable. The opinion that one has to wait with ethical reflection until the corresponding products are on the market and have already caused problems can easily be refuted. This is because the technical knowledge and capabilities are known, as a rule, long before market entry, and can, with the reservation of the well-known problems of the uncertainty of predicting the future, be judged as to their consequences and normative implications. Due to nanotechnology’s early stage of development (many prefer to call it nanoscience), we have here a rare case of an advantageous opportunity: there is the chance and also the time for concomitant reflection, as well as the opportunity to integrate the results of reflection into the process of technology design, and thereby to contribute to the further development of nanotechnology.
These considerations don’t necessarily mean that ethical deliberations have to be made for absolutely every scientific or technical idea. The problems of a timely occupation with new technologies appear most vividly in the diverse questions raised by the visions of salvation and horror regarding nanotechnology. What sense is there in concerning oneself hypothetically with the ethical aspects of an extreme lengthening of the human life-span, or with self-replicating nanorobots? Most scientists are of the opinion that these are speculations which stem from the realm of science fiction rather than from problem analysis that is to be taken seriously. We shouldn’t forget that ethical reflection ties up resources; there should therefore be certain evidence for the realizability of these visions if resources are to be invested in them which could then be lacking elsewhere. Ethical reflection is not necessary “in advance”, nor just for the sake of the intellectual diversion it provides. In this respect, we need our own “vision assessment”.
Ethical reflection and policy advice built on it are neither senseless nor premature in early phases of development if there are realistic possibilities for the practicability of the technology concerned. Then, there are possibilities for designing ethical assessments as concomitant processes of technological development. If, at first, only rather abstract considerations on the lines of technological development are possible, valuable advice for the further path of development can nonetheless already be given (e.g., by means of timely allusions to potential technology conflicts and to methods of de-escalation). Further, ethical judgement makes orientation for planning the process of technological development possible (for example, with regard to questions of equity). In the course of the continuing development of possibilities for the application of nanotechnology, it is then possible to continuously develop the – initially abstract – estimations and orientations on the basis of newly acquired knowledge and, finally, to carry out an ethically-based technology assessment: “Nanoethics is not something one can complete satisfactorily either first or last but something that needs to be continually updated.”
The added value in comparison with a later start for ethical reflection is obvious; even the process of technological development profits from the assessment of the consequences, and ethics avoids “coming too late”, which it is occasionally accused of doing.3 Technology assessment and ethics have the responsibility, in view of the rapid and momentous developments in nanotechnology, to make the societal process of learning which is always connected with the introduction of a new technology as constructive, transparent, and effective as possible by means of timely investigation and reflection.
BASIC ETHICAL PRINCIPLES OF EXPERIMENTS ON ANIMALS
The ethical principles in this sphere are expounded in the «European Convention for the Protection of Vertebrate Animals Used for Experimental and Other Scientific Purposes», adopted by the Council of Europe.
The principle of the three Rs has become the generally accepted standard: Refinement, i.e., the improvement and humanization of the handling of animals during the preparation and conduct of the experiment; Reduction, i.e., the reduction of the number of animals used; and Replacement, i.e., the replacement of higher-organized animals with alternative models.
The generally accepted ethical requirements for the use of vertebrates in medical and biological experiments are as follows:
1. Experiments on animals are admissible only in those cases where they are directed at obtaining new scientific knowledge, improving human and animal health, or preserving living nature, or are strictly necessary for high-quality education and the training of specialists.
2. Experiments on animals are justified when there are sufficient grounds to hope that the results will contribute substantially to achieving at least one of the aims listed above. It is impermissible to use animals in an experiment if these aims can be attained in another way.
3. Literal duplication of research already conducted on animals should be avoided unless it is dictated by the necessity of experimental verification of results.
4. The choice of animals, their number, and the research methods must be determined in detail before the beginning of the experiment and approved by an authorized person or expert body.
5. Animals for experiments must come from a certified nursery.
6. When conducting tests on animals, humaneness must be shown: distress and pain should be avoided, protracted harm to the animals’ health must not be inflicted, and their suffering should be alleviated. It is necessary to strive to reduce the number of animals to a minimum and, wherever possible, to use alternative methods which do not require animals.
7. Experiments on animals must be conducted by a skilled researcher who is acquainted with the rules and adheres to them. The use of animals in the educational process is carried out under the supervision of a specialist teacher.
8. Laboratories, scientific and educational establishments, and organizations in which tests on animals are carried out are subject to attestation by the bodies authorized for this purpose. In particular, their conformity with the standards of «good laboratory practice» (GLP), an international requirement for the development of medications, is checked.
How are these provisions implemented in Ukraine?
Experiments on animals are conducted in various establishments and organizations, above all those administered by the NAN, the AMN, the UAAN, the Ministry of Education and Science, and the Ministry of Health.
In fairness, it should be noted that biology and medicine in Ukraine, as in other post-Soviet states, have old traditions, one of which is a humane attitude toward experimental vertebrates. However, there is a series of problems of an economic and organizational character which need to be resolved if we want to approach European and international standards as closely as possible.
Unfortunately, the conditions for keeping animals in our vivariums were far from ideal even in the best of times. This concerns the quality and quantity of food, the housing of the animals, the technical equipment of the vivariums, ventilation, illumination, etc. The country still lacks specialized production of standard feeds for the different types of laboratory animals, and the genetic purity of the lines that are still maintained here and there raises serious doubts. Special laboratory breeds of pigs and dogs are not raised. In many cases, stray cats and dogs are used in experiments.
The creation in Ukraine of a modern nursery of certified laboratory animals cannot be delayed. For lack of financing, a project for such a nursery at the Institute of Pharmacology and Toxicology of the AMN of Ukraine has remained unrealized for several years, and the experimental breeding facilities for laboratory animals at other research establishments have not been reconstructed.
Recently, a special commission of the State Pharmacological Center of the Ministry of Health of Ukraine has been conducting verification and attestation of vivariums and laboratories and giving them concrete recommendations based on GLP standards. Great attention is paid here to work with animals. In all, about 30 establishments of different departmental subordination are subject to such verification.
Alternative methods deserve special attention. These include, in particular, experiments on invertebrate animals and in vitro research on cell cultures and microorganisms. Mathematical and computer modelling is still used too little in research. On the other hand, audio and video materials, as well as models and working mock-ups, are used ever more widely in the pedagogical process.
Recently, thanks to the efforts of committees and commissions at the presidiums of the NAN and AMN of Ukraine and at the State Pharmacological Center of the Ministry of Health of Ukraine, measures have been carried out to bring the regulation of experiments on animals into accordance with these principles, to accelerate the adoption of the necessary recommendations and normative documents, and to stimulate the introduction of alternative research methods. Information about these principles and requirements is provided in professional publications and the mass media. But it is necessary that they operate everywhere that animals are used in experiments.
This work must prepare the way for the adoption of legislative acts since, first, Ukraine cannot remain apart from the worldwide movement and, second, the accumulated practical experience will contribute to the passing of more refined acts and normative documents.
The question of responsibility for violation of norms in the field of the use of experimental animals requires careful study. Researchers and technical personnel must bear moral, disciplinary, and legal responsibility for violation of these norms. The measure of responsibility depends on the potential or actual damage inflicted on the biological safety of humans, animals, or the environment. Intentional concealment of information about the possible negative consequences of such activity must entail prosecution.
Information about the conditions of keeping and use of animals, and also about the performance of experimental work, must be open, except in those cases when it cannot be divulged in the interests of protecting state, patent, investigative, or commercial secrets. It is necessary that access to this information be free, including for public organizations incorporated in Ukraine whose statutes provide for the protection of animals and the environment.
Human rights constitute a set of norms governing the treatment of individuals and groups by states and nonstate actors on the basis of ethical principles incorporated into national and international legal systems. Because the subject matter of the norms in question relate to the treatment of human beings, human rights overlap to a considerable degree with ethics, but they nevertheless should not be confused with ethics. Similarly, because human rights include the right to health and refer to essential social determinants of health and well-being of people, they overlap with many principles and norms of bioethics. Human rights and bioethics differ, however, in scope, sources, legal nature, and the mechanisms of monitoring and applying the norms.
The scope of bioethics is the ethical issues arising from healthcare and biomedical sciences, whereas that of human rights embraces the claims individuals and groups can legitimately make against states and nonstate actors to respect their dignity, integrity, autonomy, and freedom of action as defined in an officially endorsed set of standards or norms. Bioethics regulates clinical encounters with patients on the basis of principles; human rights, by contrast, are the special rules agreed upon in a given society to achieve justice and well-being.
The source of human rights is the norm-creating process of national and international legal systems, whereas that of bioethics is the deliberations and published opinions of leading thinkers, constituted review boards, and professional associations on the health-related ethical issues they address. Bioethics and human rights share an ethical concern for just behavior, built on empathy or altruism. The proximate formal source of human rights is typically an international human rights treaty or declaration while that of bioethics is a professional code or review board guidelines. The proximate source occasionally is identical, as when an instrument of international law directly addresses an issue of bioethics and human rights, for example, in the United Nations Educational, Scientific and Cultural Organization’s (UNESCO) Universal Declaration on the Human Genome and Human Rights or the Council of Europe’s Convention for the Protection of Human Rights and Dignity of the Human Being with Regard to the Application of Biology and Medicine, both of which were adopted in 1997.
The legal nature of human rights norms ranges from merely aspirational claims to justiciable and enforceable legally binding obligations. An important distinction is made between rights and human rights. In ethics a right refers to any entitlement, the moral validity or legitimacy of which depends on the mode of moral reasoning the ethicist is using. In law, a right is any legally protected interest. In human rights discourse, a human right is a higher-order right authoritatively defined using the expression human rights with the expectation that such a right carries a peremptory character and thus prevails over other (ordinary) rights. Another distinction is between the natural law and positive law foundations of human rights. The former refers to rights deriving from the natural order or divine origin, which are inalienable, immutable, and absolute, whereas in positive law rights are recognized through a political and legal process that results in a declaration, law, treaty, or other normative instrument. These may vary over time and be subject to derogations or limitations designed to optimize respect for human rights rather than impose an absolute standard. Human rights emerge from claims of people suffering injustice and thus are based on moral sentiment, culturally determined by contextualized moral and religious belief systems. They become part of the social order when an authoritative body proclaims them, and they attain a higher degree of universality based on the participation of virtually every nation in the norm-creating process, a process that is law-based but that reflects compromise and historical shifts. 
The International Bill of Human Rights (consisting of the Universal Declaration of Human Rights [UDHR] of 1948 and the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights, both of 1966), along with the other human rights treaties of the United Nations (UN) and of regional organizations, constitute the primary sources and reference points for what properly belongs in the category of human rights.
The methods of monitoring compliance with human rights include moral judgments made with reference to recognized human rights, quasi-judicial procedures of investigation and fact-finding leading to official pronouncements of political bodies, and enforceable judicial decisions. The parallel methods of bioethics focus more on codes of bioethics and official pronouncements of professional bodies that may result in altering research design or the behavior or liability of health professionals in their relations with patients or in policies affecting the health of populations. The overlap of human rights and bioethical discourse and the differences between the two become clearer as one clarifies the following: the emergence of human rights in political and legal discourse, the content of the right to health as defined in human rights instruments, the other human rights as they relate to health and well-being, and the role and means of promotion and protection of human rights.
Support is one of the major conditions of a correct mutual relationship between doctor and patient. Support means the doctor’s aspiration to be of benefit to the patient in the given case. It also means that the doctor must take responsibility for the patient’s state of health and mood. The patient’s family and close friends must help here. However, the main resources lie hidden in the patient himself. Their full disclosure and use become possible if the patient realizes that the doctor aims to help rather than to compel. Thus, the doctor is responsible for the moral support of the patient, which activates the patient’s own role in the medical process.
The efficacy of placebo in all its material and psychological variants depends above all on the patient’s desire to get better and, finally, on his confidence in success. Understanding can also be expressed nonverbally: by a look, a nod of the head, and the like. Tone and intonation are able to demonstrate both understanding and dismissal or indifference. If a patient becomes convinced of the doctor’s incomprehension and unwillingness to understand, he automatically turns from the doctor’s helper into his opponent. Non-fulfillment of medical recommendations (and, as a result, the absence of any effect from treatment) can be the sole sign that the patient is not sure of the doctor’s personal interest in his concrete case, of his will to understand the situation, or of his professional competence. In that case, the mutual relations of doctor and patient reach a deadlock.
Respect presupposes recognition of the patient’s value as an individual and of the seriousness of his worries. The question is not only of a willingness to hear a person out; mainly, it is to show that his words carry weight for the doctor: it is necessary to acknowledge the significance of the events which have taken place in the patient’s life, especially those which are of interest from the point of view of the doctor as a professional. Even the time spent in finding out the personal circumstances of the patient’s life testifies to the doctor’s respect for him. Often all that is required from a doctor is to show personal interest actively. Simple things are important - for example, quickly memorizing the patient’s first and last name. Nonverbal communication is able both to strengthen trust in the doctor and to destroy it. If one looks the patient in the eyes and sits next to him, he will feel that he is respected. Constantly interrupting a conversation with the patient, or conducting extraneous conversations in his presence, means showing disrespect to him. It is expedient to praise a patient for his patience and for scrupulous implementation of your instructions. If a patient has shown you the results of analyses, X-ray images, and the like, note how useful this information has proved; in this way there will be positive feedback. One of the most dangerous and destructive habits of a doctor is a propensity for humiliating remarks about patients. A patient who has heard by chance how a doctor mocks him in a circle of friends will presumably never forget it and never forgive it. A similar situation can arise during the collection of the anamnesis, when a doctor constantly passes remarks concerning the patient’s inexact utterances (formulations) and accompanies the remarks with dissatisfied facial expressions and “nervous” motions of the hands. Sympathy is the key to the collaboration of doctor and patient. It is necessary to be able to put oneself in the patient’s place and to look at the world through his eyes.
Sympathy is, to a certain extent, a kind of projection (absorption) of another’s feelings onto one’s own spiritual sphere. To sympathize means to feel another with one’s whole being. Sympathy begins with the fact of our presence, sometimes taciturn, with waiting for the patient to begin to speak. A doctor must listen to a patient attentively, even if he repeats himself, and give him the opportunity to discuss the causes and effects of his illness and his future. Compassion can be expressed simply enough by laying a hand on the patient’s shoulder, which creates a certain positive emotional mood on which mutual trust can be built. However, such an attitude toward a patient by no means implies undue “familiarity”.
A certain "distance" (unnoticeable to the patient) must always be maintained between doctor and patient; at a certain stage of their mutual relations (when a patient wishes to use his good relationship with the doctor for disreputable aims), it will guarantee the doctor the preservation of his authority and dignity and will create favourable conditions for "retreat" to a reliable "defensive position".
Types of Patients Encountered in a Clinical Medical Practice
There are obviously many types of patients that we encounter every day, many more than what could be listed here. Those outlined below are some of the more common ones that we see that may require a certain degree of art and finesse to provide the best experience for them and secondarily for us.
Categorizing people in this way runs the risk of suggesting that patients can easily be “pigeon-holed” into one category or another. Nothing could be further from the truth since each person is a unique blend of various entities much more complicated than what can be described by one characteristic. However, at certain times and in certain circumstances, certain types of behaviors come to the forefront that may require special consideration on your part in tempering and adjusting your response so that the end result is acceptable to all, hopefully.
Certain characteristics of patients that may be encountered in clinical medicine…
Pleasant - These folks are the easiest to care for, as you would expect, but there are still certain cautions to consider, like (a) getting too attached, (b) the desire to be too reassuring and optimistic when realism may dictate otherwise, (c) the wish to please the patient and honor requests that may not be in the best interests of his/her medical care.
Courageous - These are the patients we all deeply admire for their strength, fortitude, perseverance and acceptance in the face of tremendous adversity. We don't know where they get this from, but we all wish and hope we have it ourselves when we need it.
Angry - Not pleasant initially, but usually manageable if you know how. The difficulty is in controlling your instinctive reaction to be angry in return. If you handle it well, usually the anger is short-lived, and there will still be a happy ending.
Manipulative - Frequently more challenging than many of the others. Unfortunately, these patients have learned certain behaviors that have resulted in getting personal attention and obtaining the desired self-centered results. They usually do not regard themselves as manipulative and don’t respond well to attempts to change their behavior.
Demanding - Also referred to as “high-maintenance”. They request lots of extra attention, more so than their condition usually requires. They require much more time and energy than average. Once again, they are not usually aware of this, and should not be judged harshly.
Drug-seeking - Sometimes easy to spot, but sometimes not. The continual request for higher doses of narcotics or sedatives in a situation where the symptoms far outweigh the physical findings is often a red flag. Again, it is important not to be judgmental. These people need help too but not more narcotics.
Direct - Those who tell it like it is. If they don’t agree with what you’re doing, they will immediately let you know. They want to be in control and are quite upset if they are not or if they don’t perceive that they are. They are not necessarily angry; they are just very vocal about their disagreements.
All-knowing - May or may not have had a brief medical background, but whatever you are discussing with them, they seem to have some limited knowledge about it that leads them to believe that they know a lot about it. They bring in articles for you to read so that you can become as knowledgeable about their conditions as they are.
Noncompliant - Frequently tend to be frustrating to health professionals, mostly because they usually don’t do what you think is best for them. We sometimes wonder why they bother to seek our advice when they’re not going to follow through with it anyway.
Anxious - Usually require more reassurance than most. Sometimes, you may tend to be too reassuring to get someone to calm down only to find out later that s/he truly has a serious problem. Hypochondriasis, phobias and panic attacks are more extreme examples of anxiety and can be very challenging. Be very careful attributing patients' symptoms to anxiety, stress or depression just because all the tests are normal.
Psychosomatic - May be some of the most difficult diagnostic dilemmas that you will face. Their complaints seem very real, but a specific diagnosis cannot be found to explain the symptoms. The clinician is constantly asking himself if he’s missing something. Lots of resources are used in various testing procedures. Ultimately, when confronted with the diagnosis that the symptoms are psychosomatic, patients often become very upset and disbelieving.
Depressed - Very common in private practice and fairly easy to diagnose most of the time; but again, many patients are very resistant to this as a diagnosis for their symptoms, so it has to be approached cautiously. The biggest difficulty with this type of patient is in recognizing suicidal tendencies, which are sometimes denied by certain patients and not always easy to see in a 15-minute office visit.
Suffering patients are in great need of our services, which unfortunately are denied too often because of the unreasonable fear of addiction, especially in those with incurable or terminal diseases.
Chronic pain - Those with nonmalignant illnesses causing pain are even more of a challenge. There seems to be a fine, but very blurred, line between need and abuse, which is not always easy to see. Most doctors are afraid to see these people because of the fear of serious retribution if anything goes wrong. This is sad because many of them truly are needlessly miserable and nonfunctional.
Dying patients are as much in need of our care at this time in their lives as ever, probably even more so, although there is sometimes a tendency for us to withdraw, thinking there's not much else we can do. This is the time when they probably need you the most.
Geriatric patients also need special attention and consideration. Many are disabled, have 3 or 4 chronic illnesses, increased rates of depression and anxiety, and require a significant amount of patience and compassion from their caregivers, physicians and family in order to get by.
Of course in clinical practice, we often see various combinations of the above characteristics within many of our patients. For example, you may encounter a very pleasant, polite person who is also demanding and manipulative at the same time, or you may see a person who is very anxious about his or her health but noncompliant when it comes to doing anything to improve the situation. This is part of what makes practicing medicine so interesting and challenging at the same time.
Taking care of some of the above patient types may seem like a daunting task if you are just starting a medical or nursing career. The vast majority of people are very nice and very grateful for your care and are not at all difficult. As long as you are caring, concerned about their welfare and attentive to their needs, they will remain happy and be a pleasure to serve.
The above characteristics are those encountered in all walks of life. When people become ill and stressed, these traits may become exacerbated. It may be helpful to consider your response to these situations so that you can be prepared for them and actually improve upon the experiences and the relationships that you develop in your practice. The more you can understand these interactions, the more effective you will be as a physician or other health professional.
Here are two examples. A professor, accompanied by interns, the head of the department, and students, visits the room of a seriously ill doctor. After formally asking the patient how he feels, he proceeds, in front of his colleagues, to discuss the long-term prognosis of the illness, possible complications, and the like. Imagine what heartache such conclusions inflict on the sick colleague. What value does his professional training have when elementary professional culture is absent? Can knowledge of medical ethics save the situation? Probably not. Now the opposite case: during the evening shift a junior nurse, a simple rural woman, enters the same room and, from the doorway, along with a cheerful greeting, declares that the patient looks much better today. Here is inner culture; here is an elixir of health! She needs no formal knowledge of ethics, because she has a natural, truly God-given gift for doing people good. A doctor who reveals a necessary truth to a patient must also give hope. To help a patient look the truth in the eye without abandoning hope for the better is one of the greatest and most important precepts. Breaking bad news is hard. The main thing is to determine how much of the truth to tell the patient at a single visit. Usually the patient's reactions and questions make clear how much truth he wants to hear. The truth is given in small doses, although in some Western countries doctors try, over time, to reveal the whole truth to their patients. Finally, note one more point: it is considered normal for the patient to tell the truth to the doctor, and vice versa. Yet the most important thing for a doctor is to tell the truth to himself, to admit his failings and define his possibilities. The ability to set the limits of the possible and to distribute one's forces effectively is very important for a doctor.
In clinical medicine, situations in which the convictions of doctor and patient do not coincide are increasingly common. Irreconcilable contradictions arise, for example, when a doctor refuses to prescribe a treatment the patient demands, or when a patient categorically refuses to follow the doctor's recommendations. A doctor has the full right not to resort to a potentially dangerous medical method, not to grant a request to prescribe particular drugs, and, at the patient's insistence, to halt an examination even before a clinical diagnosis has been established. Doctors long ago learned to get along with patients who not only express requests but issue orders. Where there is a choice, the patient must choose; but sometimes a doctor is forced to say a categorical "no".
Emergence of Human Rights
The early formulation of the norms that are characterized today as human rights is inseparable from historical and philosophical manifestations of human striving for justice. Ultimately, human rights certainly derive from basic human instincts of survival of the species and behavior of empathy and altruism that evolutionary biology is only beginning to understand. Since human evolution is driven by reproductive selfishness, one could wonder why the human species would develop any ethical system, like that of human rights, according to which individuals manifest feeling for the suffering of others (empathy) and—even more surprising— act in self-sacrificing ways for the benefit of others without achieving any noticeable reproductive advantage. And yet, as Paul Ehrlich notes in Human Natures, “empathy and altruism often exist where the chances for any return for the altruist are nil” (p. 312). Natural selection does not provide the answer to moral behavior as “there aren’t enough genes to code the various required behaviors” but rather “cultural evolution is the source of ethics” and therefore of human rights.
Religion and law have an ambiguous role in this historical process. The history of religions is replete with advances in the moral principles of behavior—many of which directly influenced the drafting of human rights texts—but also in crimes committed in the name of a Supreme Being. Similarly, the emergence of the rule of law has been critical both to advancing justice and human rights against the arbitrary usurpation of power in most societies and to preserving the impunity of oppressors.
Scholars trace the current configuration of international human rights norms and procedures to the revolutions of freedom and equality that transformed governments across Europe and North America in the eighteenth century and that liberated subjugated people from slavery and colonial domination in the nineteenth and twentieth centuries.
Enlightenment philosophers derived the centrality of the individual from their theories of the state of nature. Social contractarians, especially the eighteenth-century French philosopher Jean-Jacques Rousseau, predicated the authority of the state on its capacity to achieve the optimum enjoyment of natural rights, that is, of rights inherent in each individual irrespective of birth or status. Rousseau wrote in A Discourse on the Origin of Inequality (1755) that “it is plainly contrary to the law of nature … that the privileged few should gorge themselves with superfluities, while the starving multitude are in want of the bare necessities of life”. Equally important was the concept of the universalized individual (“the rights of Man”), reflected in the political thinking of Immanuel Kant, John Locke, Thomas Paine, and the authors of the French and American declarations.
Much of this natural law tradition is secularized in contemporary human rights. World War II was the defining event for the internationalization of human rights, with the latter anticipated by Roosevelt’s “Four Freedoms” speech (1941), confirmed by the inclusion of human rights in the UN Charter (1945), and applied at the trial of Nazi doctors, leading to the Nuremberg Code (1946). In the war’s immediate aftermath, bedrock human rights texts were adopted: the Genocide Convention and the UDHR in 1948 and the Geneva Conventions in 1949, followed in 1966 by the two international covenants. Nongovernmental organizations (NGOs) played a role in all these developments and in subsequent drafting of treaties, as well as in the creation of investigative and accountability procedures at the intergovernmental level and at the national level. These processes were instrumental in bringing down South African apartheid, transforming East-Central Europe, and restoring democracy in Latin America. Human rights NGOs are now active on all continents.
The Normative Content of Human Rights:
The Right to Health
The current catalogue of human rights consists of some fifty normative propositions. They are enumerated in the international bill of human rights, extended by a score of specialized UN treaties, a half-dozen regional human rights treaties, and hundreds of international normative instruments in the fields of labor, refugees, armed conflict, and criminal law.
The meaning, scope, and practical significance of the right to health are particularly relevant for bioethics. The right to health as understood in international human rights law is defined in article 25 of the 1948 Universal Declaration of Human Rights (“Everyone has the right to a standard of living adequate for the health of himself and of his family, including food, clothing, housing and medical care and necessary social services.”) and in article 12 of the 1966 International Covenant on Economic, Social and Cultural Rights (ICESCR) (“the right of everyone to the enjoyment of the highest attainable standard of physical and mental health”). Variations on these definitions are found in most of the core UN and regional human rights treaties. In 2000 the Committee on Economic, Social and Cultural Rights (CESCR), which was created to monitor the ICESCR, analyzed the normative content of the right to health in terms of availability, accessibility, acceptability, and quality of care and specified the duties of the state to respect, protect, and fulfill this right. The committee also listed fourteen human rights as “integral components of the right to health.” These related rights define to a large extent the determinants of health. The right to health does not mean the right to be healthy, because being healthy is determined only in part by healthcare; it is also determined by genetic predisposition and social factors. The field of social epidemiology has excelled at establishing that discrimination based on race, class, or gender, denial of education and of decent working conditions, and other social factors contribute directly to increased rates of mortality and morbidity.
These social determinants may also be defined in human rights terms as deprivation of these health-related rights, which are among the most salient social factors that contribute to healthy lives. The summary below seeks to underscore the function of human rights as determinants of health by highlighting their normative content and their relation to health.
Health-Related Human Rights
Health is profoundly related to human rights both because human rights violations have health impacts—such as those on torture survivors—and because human rights concern the dignity, integrity, autonomy of action, and conditions of social functioning of people. Some examples will be provided in each of these areas. Foremost among the human rights relating to physical and mental integrity is the right not to be arbitrarily deprived of life, which does not rule out death resulting from lawful acts of warfare or capital punishment, although international humanitarian law limits the former, and newer protocols and regional conventions, supported by UN resolutions and social movements, define the latter as a violation of human rights. Special treaties and procedures exist for prevention and repression of torture, disappearance, summary and extrajudicial execution, crimes against humanity, genocide, slavery, racial discrimination, and various forms of terrorism. Most of these are also dealt with in international humanitarian law, which was established to protect victims of armed conflict (injured and shipwrecked combatants, prisoners of war, and civilian populations notably under occupation) and codified in the four Geneva Conventions of 1949 and the Additional Protocols of 1977.
The right to “a standard of living adequate for the health and well-being” of oneself and one’s family was defined in the UDHR as including “food, clothing, housing and medical care and necessary social services” as well as “the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond [one’s] control.” Subsequently, the rights to health, work, safe and healthy working conditions (occupational health), adequate food and protection from malnutrition and famine, adequate housing, and social security (that is, a regime covering long-term disability, old age, unemployment, and other conditions) have been further elaborated by the International Labour Organisation, the UN Commission on Human Rights, and the work of special rapporteurs and treaty bodies.
Dignity tends to be mentioned as both the basis for all human rights and a right per se. The great civil liberties — freedom of oral and written expression, freedom of conscience, opinion, religion, or belief—as well as freedom from arbitrary detention or arrest, rights to a fair hearing and an effective remedy for violations of human rights, and protection of privacy in domicile and correspondence, all support the autonomy of individuals to act without interference from the state or others. A separate but related human right is that of informed consent to medical experimentation, which was included in post-1945 enumerations of rights because of the extensive abuse of that right during World War II. Equality and nondiscrimination are human rights that are at the same time principles for the application of all other human rights, because they require that all persons be treated equally in the enjoyment of their human rights and that measures be taken to remove discriminatory practices on prohibited grounds. Freedom of movement means the right to reside where one pleases and to leave any country, including one’s own, and to return to one’s country. The right to seek and enjoy asylum from persecution is also a human right, which has been developed and expanded by international refugee law, the practice of the UN High Commissioner for Refugees, and recent codes relating to internally displaced persons. This right, like many others, is not absolute; limitations may be imposed, for example, in time of epidemic, as long as certain safeguards, defined in human rights law, are observed. Social well-being depends in large measure on group identity, education, family, culture, political and cultural participation, gender and reproductive rights, scientific activity, the environment, and development, all of which are the subject of specific human rights. 
The basic human rights texts affirm a limited number of group rights, notably the rights of peoples to self-determination, that is, in the terms of the ICCPR and the ICESCR, to “determine their political status and freely pursue their economic, social and cultural development” and to permanent sovereignty over natural resources. They also enumerate the rights of persons belonging to minorities to practice their religion, enjoy their culture, and use their language. Indigenous peoples have defined rights that take into account their culture and special relation to the land.
The right to education is defined in the ICESCR and by the CESCR, as well as specialized instruments of UNESCO. Other rights of the child have been codified in the 1989 Convention on the Rights of the Child. Political rights include the right to run for office and to vote in genuine and periodic elections. Cultural rights refer primarily to the right to participate in the cultural life of the community; the protection of writers, artists, and performers; and the preservation of cultural heritage.
Health issues loom large in human rights standard-setting and policy determination regarding gender and sexual and reproductive rights. The basic human rights texts have been supplemented by a specialized Convention on the Elimination of All Forms of Discrimination against Women (CEDAW) of 1979. Considerable advances in mainstreaming women’s rights as human rights were made through international conferences, a 1993 Declaration on Violence against Women, the work of a special rapporteur on this problem, and statements and programs on traditional practices harmful to health, such as female genital mutilation. Reproductive rights include the right of “men and women … to decide freely and responsibly on the number and spacing of their children” (CEDAW, article 16) and “to be informed and to have access to safe, effective, affordable and acceptable methods of family planning of their choice” (ICPD 1994). Various internationally approved programs and plans of action have set out in considerable detail the specific ways in which this right can be realized. Bioethical concerns overlap with human rights with respect to the right to enjoy the benefits of scientific progress and rights in scientific research. The former refers to the positive and equitable use of scientific advances, while the latter protect freedom to conduct research and disseminate results and the requirement of informed consent of human subjects. Occasionally, scholars refer to solidarity or third-generation rights to certain global values such as peace, a healthy environment, development, communication, and humanitarian intervention or assistance. Two rights in this category have become more systematically developed and enshrined in authoritative texts: the rights to a healthy environment and to development.
The former has been recognized in many national constitutions and in the regional human rights texts. The latter has been recognized in numerous UN resolutions and specifically in a 1986 declaration, as well as in the African Charter on Human and Peoples’ Rights.
The 1986 Declaration on the Right to Development defines the right to development as “an inalienable human right by virtue of which every human person and all peoples are entitled to participate in, contribute to, and enjoy economic, social, cultural and political development, in which all human rights and fundamental freedoms can be fully realized.” Finally, article 28 of the UDHR proclaims the right of everyone to “a social and international order in which the rights and freedoms set forth in this Declaration can be fully realized.” This right is perhaps the broadest but also the most significant in making human rights the ordering criterion for national societies and international relations. The required social order suggests a democratic constitutional regime in which human rights of all categories are recognized in law and effectively observed in practice. It also suggests that international relations provide support for global efforts to further human rights and to establish means of accountability for persons and groups to obtain redress from countries that fail to fulfill their human rights obligations.
THE “POPULATION PROBLEM.”
Population policies usually begin with some notion of a problem. For strong advocates of fertility control, such as Paul Ehrlich and Anne Ehrlich (1990), the problem is captured in phrases such as “the population bomb” or “the population explosion.” According to others, particularly Julian Simon (1981), population growth brings many benefits to society, including the stimulation of human creativity. And for some, fertility, migration, and refugees are complex phenomena that must be carefully studied and that may produce no catchwords that draw public attention. Any definition of a population problem, or a statement that there is none, must be governed by the principle of truth telling. Those claiming a problem exists should indicate the good promoted or the evil created by fertility, migration, and refugees. What, precisely, has population done to make it qualify as a problem or a nonproblem? Statements of a problem should also give a fair summary of the evidence bearing on the subject and its limitations. If the findings are drawn from simulations, or cover a small sample of the countries in the world, those points should be disclosed. Scholars violate truth telling when they say or imply that simulations done through a hypothetical model of reality are equivalent to data on what people or organizations actually do. Further, when scholars who write on population work for or are funded by organizations promoting or trying to prevent action on population, such as the World Bank or a right-to-life committee, can it be determined whether they have remained objective or have taken on the advocacy role of their sponsors? If scholars have merged research and advocacy, do they indicate where research stops and advocacy begins? Truth telling requires that all relevant information be presented, even when it may harm one’s active endorsement of a policy. 
Claims that a problem exists must next show the specific connection between research evidence and the good or evil that makes it a problem. That connection often proves elusive. Data showing that the poorest nations of the world have the highest fertility and the wealthiest nations the lowest fertility may seem to establish a link between population growth and economic development. Indeed, such data are commonly used to support claims of a “population bomb.” Yet many studies have failed to show that rapid population growth holds back economic development in the industrialized or developing countries, and a few suggest that it may have advantages (Boserup; National Research Council, 1986). To meet the standard of truth telling, scholars should not, as often happens, cite only those studies that support the view of a population problem to which they subscribe and omit contrary evidence.
BIOMEDICAL RESEARCH (OR EXPERIMENTAL MEDICINE)
Biomedical research (or experimental medicine), generally known simply as medical research, is the basic research, applied research, or translational research conducted to aid and support the body of knowledge in the field of medicine. Medical research can be divided into two general categories: the evaluation of new treatments for both safety and efficacy in what are termed clinical trials, and all other research that contributes to the development of new treatments. The latter is termed preclinical research if its goal is specifically to elaborate knowledge for the development of new therapeutic strategies. A newer paradigm for biomedical research, termed translational research, focuses on iterative feedback loops between the basic and clinical research domains to accelerate knowledge translation from the bench to the bedside, and back again. Medical research may involve research on public health, biochemistry, clinical research, microbiology, physiology, oncology, and surgery, as well as research on many non-communicable diseases such as diabetes and cardiovascular disease.
The increased longevity of humans over the past century can be significantly attributed to advances resulting from medical research. Among the major benefits have been vaccines for measles and polio, insulin treatment for diabetes, classes of antibiotics for treating a host of maladies, medication for high blood pressure, improved treatments for AIDS, statins and other treatments for atherosclerosis, new surgical techniques such as microsurgery, and increasingly successful treatments for cancer. New, beneficial tests and treatments are expected as a result of the Human Genome Project. Many challenges remain, however, including the appearance of antibiotic resistance and the obesity epidemic.
Most of the research in the field is pursued by biomedical scientists, though significant contributions are also made by other biologists, as well as chemists and physicists. Medical research done on humans must strictly follow the medical ethics codified in the Declaration of Helsinki and elsewhere. In all cases, research ethics must be respected.
Moral and Ethical Principles of Conducting Experiments on Animals
Animal testing, also known as animal experimentation, animal research, and in vivo testing, is the use of non-human animals in experiments, particularly model organisms such as nematode worms, fruit flies, zebrafish, and mice. Worldwide, it is estimated that 50 to 100 million vertebrate animals are used annually, along with a great many more invertebrates. Although the use of flies and worms as model organisms is very important, experiments on invertebrates are largely unregulated and not included in statistics. Most animals are euthanized after being used in an experiment. Sources of laboratory animals vary between countries and species; while most animals are purpose-bred, others may be caught in the wild or supplied by dealers who obtain them from auctions and pounds.
The research is conducted inside universities, medical schools, pharmaceutical companies, farms, defense establishments, and commercial facilities that provide animal-testing services to industry. It includes pure research such as genetics, developmental biology, behavioural studies, as well as applied research such as biomedical research, xenotransplantation, drug testing and toxicology tests, including cosmetics testing. Animals are also used for education, breeding, and defense research.
Supporters of the practice, such as the British Royal Society, argue that virtually every medical achievement in the 20th century relied on the use of animals in some way, with the Institute for Laboratory Animal Research of the U.S. National Academy of Sciences arguing that even sophisticated computers are unable to model interactions between molecules, cells, tissues, organs, organisms, and the environment, making animal research necessary in many areas. Some scientists and animal rights organizations, such as PETA and BUAV, question the legitimacy of the practice, arguing that it is cruel, that it represents poor scientific practice, that it is poorly regulated, that medical progress is being held back by misleading animal models, that some of the tests are outdated, that animal data cannot reliably predict effects in humans, that the costs outweigh the benefits, or that animals have an intrinsic right not to be used for experimentation. The practice of animal testing is regulated to varying extents in different countries.
The terms animal testing, animal experimentation, animal research, in vivo testing, and vivisection have similar denotations but different connotations. Literally, "vivisection" means the "cutting up" of a living animal, and historically referred only to experiments that involved the dissection of live animals. The term is occasionally used to refer pejoratively to any experiment using living animals; for example, the Encyclopaedia Britannica defines "vivisection" as: "Operation on a living animal for experimental rather than healing purposes; more broadly, all experimentation on live animals", although dictionaries point out that the broader definition is "used only by people who are opposed to such work". The word has a negative connotation, implying torture, suffering, and death. The word "vivisection" is preferred by those opposed to this research, whereas scientists typically use the term "animal experimentation".
An Experiment on a Bird in an Air Pump, from 1768, by Joseph Wright
The earliest references to animal testing are found in the writings of the Greeks in the second and fourth centuries BCE. Aristotle (Αριστοτέλης) (384-322 BCE) and Erasistratus (304-258 BCE) were among the first to perform experiments on living animals. Galen, a physician in second-century Rome, dissected pigs and goats, and is known as the "father of vivisection." Avenzoar, an Arab physician in twelfth-century Moorish Spain who also practiced dissection, introduced animal testing as an experimental method of testing surgical procedures before applying them to human patients.
Animals have been used throughout the history of scientific research. In the 1880s, Louis Pasteur convincingly demonstrated the germ theory of disease by inducing anthrax in sheep. In the 1890s, Ivan Pavlov famously used dogs to describe classical conditioning. Insulin was first isolated from dogs in 1922, and revolutionized the treatment of diabetes.
Toxicology testing became important in the 20th century. In the 19th century, laws regulating drugs were more relaxed. For example, in the U.S., the government could only ban a drug after a company had been prosecuted for selling products that harmed customers. However, in response to a tragedy in 1937 in which a drug labeled “Elixir of Sulfanilamide” killed more than 100 people, the U.S. Congress passed laws that required safety testing of drugs on animals before they could be marketed. Other countries enacted similar legislation. In the 1960s, in reaction to the thalidomide tragedy, further laws were passed requiring safety testing on pregnant animals before a drug could be sold.
The controversy surrounding animal testing dates back to the 17th century. In 1655, the advocate of Galenic physiology Edmund O'Meara said that "the miserable torture of vivisection places the body in an unnatural state." O'Meara and others argued that animal physiology could be affected by pain during vivisection, rendering results unreliable. There were also objections on ethical grounds, with critics contending that the benefit to humans did not justify the harm to animals. Early objections to animal testing also came from another angle — many people believed that animals were inferior to humans and so different that results from animals could not be applied to humans.
On the other side of the debate, those in favor of animal testing held that experiments on animals were necessary to advance medical and biological knowledge. Claude Bernard, known as the "prince of vivisectors" and the father of physiology — whose wife, Marie Françoise Martin, founded the first anti-vivisection society in France in 1883 — famously wrote in 1865 that "the science of life is a superb and dazzlingly lighted hall which may be reached only by passing through a long and ghastly kitchen". Arguing that "experiments on animals ... are entirely conclusive for the toxicology and hygiene of man...the effects of these substances are the same on man as on animals, save for differences in degree," Bernard established animal experimentation as part of the standard scientific method. In 1896, the physiologist and physician Dr. Walter B. Cannon said “The antivivisectionists are the second of the two types Theodore Roosevelt described when he said, ‘Common sense without conscience may lead to crime, but conscience without common sense may lead to folly, which is the handmaiden of crime.’ ” These divisions between pro- and anti- animal testing groups first came to public attention during the brown dog affair in the early 1900s, when hundreds of medical students clashed with anti-vivisectionists and police over a memorial to a vivisected dog.
In 1822, the first animal protection law was enacted in the British parliament, followed by the Cruelty to Animals Act (1876), the first law specifically aimed at regulating animal testing. The legislation was promoted by Charles Darwin, who wrote to Ray Lankester in March 1871: "You ask about my opinion on vivisection. I quite agree that it is justifiable for real investigations on physiology; but not for mere damnable and detestable curiosity. It is a subject which makes me sick with horror, so I will not say another word about it, else I shall not sleep to-night." Opposition to the use of animals in medical research first arose in the United States during the 1860s, when Henry Bergh founded the American Society for the Prevention of Cruelty to Animals (ASPCA), with America's first specifically anti-vivisection organization being the American Anti-Vivisection Society (AAVS), founded in 1883. Antivivisectionists of the era generally believed the spread of mercy was the great cause of civilization, and that vivisection was cruel. However, in the USA the antivivisectionists' efforts were defeated in every legislature, overwhelmed by the superior organization and influence of the medical community. Overall, this movement had little legislative success until the passing of the Laboratory Animal Welfare Act in 1966.
Types of vertebrates used in animal testing in Europe in 2005: a total of 12.1 million animals were used.
Accurate global figures for animal testing are difficult to obtain. The British Union for the Abolition of Vivisection (BUAV) estimates that 100 million vertebrates are experimented on around the world every year, 10–11 million of them in the European Union. The Nuffield Council on Bioethics reports that global annual estimates range from 50 to 100 million animals.
None of the figures, including those given in this article, include invertebrates, such as shrimp and fruit flies. Animals bred for research then killed as surplus, animals used for breeding purposes, and animals not yet weaned (which most laboratories do not count) are also not included in the figures.
Although many more invertebrates than vertebrates are used, these experiments are largely unregulated by law. The most used invertebrate species are Drosophila melanogaster, a fruit fly, and Caenorhabditis elegans, a nematode worm. In the case of C. elegans, the worm's body is completely transparent and the precise lineage of all the organism's cells is known, while studies in the fly D. melanogaster can use an amazing array of genetic tools.
Fruit flies are commonly used.
These animals offer great advantages over vertebrates, including their short life cycle and the ease with which large numbers may be studied, with thousands of flies or nematodes fitting into a single room. However, the lack of an adaptive immune system and their simple organs prevent worms from being used in medical research such as vaccine development. Similarly, flies are not widely used in applied medical research, as their immune system differs greatly from that of humans, and diseases in insects can be very different from diseases in vertebrates.
· Non-primate vertebrates
This rat is being deprived of restful REM sleep by a researcher using a single-platform ("flower pot") technique.
Mice are the most commonly used vertebrate species because of their size, low cost, ease of handling, and fast reproduction rate. Other commonly used rodents are guinea pigs, hamsters, and gerbils. Mice are widely considered to be the best model of inherited human disease and share 99% of their genes with humans. With the advent of genetic engineering technology, genetically modified mice can be generated to order and can provide models for a range of human diseases. Rats are also widely used for physiology, toxicology and cancer research, but genetic manipulation is much harder in rats than in mice, which limits the use of these rodents in basic science.
A white Wistar lab rat
Nearly 200,000 fish and 20,000 amphibians were used in the UK in 2004. Over 20,000 rabbits were used for animal testing in the UK in 2004. Albino rabbits are used in eye irritancy tests because rabbits have less tear flow than other animals, and the lack of eye pigment in albinos makes the effects easier to visualize. Rabbits are also frequently used for the production of polyclonal antibodies.
Cats are most commonly used in neurological research. Dogs are widely used in biomedical research, testing, and education — particularly beagles, because they are gentle and easy to handle. They are commonly used as models for human diseases in cardiology, endocrinology, and bone and joint studies, research that tends to be highly invasive, according to the Humane Society of the United States.
Around 65,000 primates are used each year in the U.S. and Europe.
· Non-human primates
Animals used by laboratories are largely supplied by specialist dealers. Sources differ for vertebrate and invertebrate animals. Most laboratories breed and raise flies and worms themselves, using strains and mutants supplied from a few main stock centers. For vertebrates, sources include breeders who supply purpose-bred animals; businesses that trade in wild animals; and dealers who supply animals sourced from pounds, auctions, and newspaper ads. Animal shelters also supply the laboratories directly.
A laboratory mouse cage. Mice are either bred commercially, or raised in the laboratory.
Basic or pure research investigates how organisms behave, develop, and function. Those opposed to animal testing object that pure research may have little or no practical purpose, but researchers argue that it may produce unforeseen benefits, rendering the distinction between pure and applied research — research that has a specific practical aim — unclear.
Pure research uses larger numbers and a greater variety of animals than applied research. Fruit flies, nematode worms, mice and rats together account for the vast majority, though small numbers of other species are used, ranging from sea slugs through to armadillos.
Examples of the types of animals and experiments used in basic research include:
· Studies on embryogenesis and developmental biology. Mutants are created by adding transposons into their genomes, or specific genes are deleted by gene targeting. By studying the developmental changes these mutations produce, scientists aim to understand both how organisms normally develop and what can go wrong in this process. These studies are particularly powerful because the basic controls of development, such as the homeobox genes, have similar functions in organisms as diverse as fruit flies and humans.
· Experiments into behavior, to understand how organisms detect and interact with each other and their environment, in which fruit flies, worms, mice, and rats are all widely used. Studies of brain function, such as memory and social behavior, often use rats and birds. For some species, behavioral research is combined with enrichment strategies for animals in captivity because it allows them to engage in a wider range of activities.
· Breeding experiments to study evolution and genetics. Laboratory mice, flies, fish, and worms are inbred through many generations to create strains with defined characteristics. These provide animals of a known genetic background, an important tool for genetic analyses. Larger mammals are rarely bred specifically for such studies due to their slow rate of reproduction, though some scientists take advantage of inbred domesticated animals, such as dog or cattle breeds, for comparative purposes. Scientists studying how animals evolve use many animal species to see how variations in where and how an organism lives (their niche) produce adaptations in their physiology and morphology. As an example, sticklebacks are now being used to study how many and which types of mutations are selected to produce adaptations in animals' morphology during the evolution of new species.
Applied research aims to solve specific and practical problems. Compared to pure research, which is largely academic in origin, applied research is usually carried out in the pharmaceutical industry, or by universities in commercial partnerships. These may involve the use of animal models of diseases or conditions, which are often discovered or generated by pure research programmes. In turn, such applied studies may be an early stage in the drug discovery process. Examples include:
· Genetic modification of animals to study disease. Transgenic animals have specific genes inserted, modified or removed, to mimic specific conditions such as single-gene disorders like Huntington's disease. Other models mimic complex, multifactorial diseases with genetic components, such as diabetes, or even transgenic mice that carry the same mutations that occur during the development of cancer. These models allow investigations of how and why a disease develops, as well as providing ways to develop and test new treatments. The vast majority of these transgenic models of human disease are lines of mice, the mammalian species in which genetic modification is most efficient. Smaller numbers of other animals are also used, including rats, pigs, sheep, fish, birds, and amphibians.
· Studies on models of naturally occurring diseases and conditions. Certain domestic and wild animals have a natural propensity or predisposition for conditions that are also found in humans. Cats are used as a model to develop immunodeficiency virus vaccines and to study leukemia because of their natural predisposition to FIV and Feline leukemia virus. Certain breeds of dog suffer from narcolepsy, making them the major model used to study the human condition. Armadillos and humans are among only a few animal species that naturally suffer from leprosy; as the bacteria responsible for this disease cannot yet be grown in culture, armadillos are the primary source of bacilli used in leprosy vaccines.
· Studies on induced animal models of human diseases. Here, an animal is treated so that it develops pathology and symptoms that resemble a human disease. Examples include restricting blood flow to the brain to induce stroke, or giving neurotoxins that cause damage similar to that seen in Parkinson's disease. Such studies can be difficult to interpret, and it is argued that they are not always comparable to human diseases. For example, although such models are now widely used to study Parkinson's disease, the British anti-vivisection interest group BUAV argues that these models only superficially resemble the disease symptoms, without the same time course or cellular pathology. In contrast, scientists assessing the usefulness of animal models of Parkinson's disease, as well as the medical research charity The Parkinson's Appeal, state that these models were invaluable and that they led to improved surgical treatments such as pallidotomy, new drug treatments such as levodopa, and later deep brain stimulation.
Toxicology testing, also known as safety testing, is conducted by pharmaceutical companies testing drugs, or by contract animal-testing facilities, such as Huntingdon Life Sciences, on behalf of a wide variety of customers. According to 2005 EU figures, around one million animals are used every year in Europe in toxicology tests, which account for about 10% of all procedures. According to Nature, 5,000 animals are used for each chemical being tested, with 12,000 needed to test pesticides. The tests are conducted without anesthesia, because drug interactions can affect how animals detoxify chemicals and may interfere with the results.
A rabbit during a Draize test
Toxicology tests are used to examine finished products such as pesticides, medications, food additives, packaging materials, and air fresheners, or their chemical ingredients. Most tests involve ingredients rather than finished products, but according to BUAV, manufacturers believe these tests overestimate the toxic effects of substances; they therefore repeat the tests using their finished products to obtain a less toxic label.
The substances are applied to the skin or dripped into the eyes; injected intravenously, intramuscularly, or subcutaneously; inhaled either by placing a mask over the animals and restraining them, or by placing them in an inhalation chamber; or administered orally, through a tube into the stomach, or simply in the animal's food. Doses may be given once, repeated regularly for many months, or for the lifespan of the animal.
There are several different types of acute toxicity tests. The LD50 ("Lethal Dose, 50%") test is used to evaluate the toxicity of a substance by determining the dose required to kill 50% of the test animal population. This test was removed from OECD international guidelines in 2002, replaced by methods such as the fixed-dose procedure, which use fewer animals and cause less suffering. Nature writes that, as of 2005, "the LD50 acute toxicity test ... still accounts for one-third of all animal [toxicity] tests worldwide." Irritancy is usually measured using the Draize test, in which a test substance is applied to an animal's eyes or skin, usually those of an albino rabbit. For Draize eye testing, the recommended protocol involves observing the effects of the substance at intervals and grading any damage or irritation, with the test halted and the animal killed if it shows "continuing signs of severe pain or distress". The Humane Society of the United States writes that the procedure can cause redness, ulceration, hemorrhaging, cloudiness, or even blindness. The test has also been criticized by scientists as cruel and inaccurate, subjective, over-sensitive, and failing to reflect human exposures in the real world. Although no accepted in vitro alternatives exist, a modified form of the Draize test called the low-volume eye test may reduce suffering and provide more realistic results, but it has not yet replaced the original test.
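The idea behind the LD50 figure can be illustrated with a small sketch. This is not a regulatory protocol: real studies fit a probit or logistic dose-response curve, and all the doses and mortality fractions below are hypothetical; the sketch simply interpolates the dose at which mortality crosses 50%.

```python
def estimate_ld50(doses, mortality):
    """Roughly estimate the LD50 by linear interpolation between the two
    doses that bracket 50% mortality. Illustrative only."""
    pairs = sorted(zip(doses, mortality))
    for (d0, m0), (d1, m1) in zip(pairs, pairs[1:]):
        if m0 <= 0.5 <= m1:
            # Interpolate the dose where the mortality curve crosses 0.5.
            return d0 + (0.5 - m0) * (d1 - d0) / (m1 - m0)
    raise ValueError("mortality never crosses 50% in the tested range")

# Hypothetical dose (mg/kg) -> fraction of the test group that died.
doses = [10, 20, 40, 80]
mortality = [0.0, 0.2, 0.6, 1.0]

print(round(estimate_ld50(doses, mortality), 1))  # -> 35.0
```

Alternatives such as the fixed-dose procedure avoid targeting lethality at all, which is one reason they require fewer animals.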
The most stringent tests are reserved for drugs and foodstuffs. For these, a number of tests are performed, lasting less than a month (acute), one to three months (subchronic), and more than three months (chronic) to test general toxicity (damage to organs), eye and skin irritancy, mutagenicity, carcinogenicity, teratogenicity, and reproductive problems. The cost of the full complement of tests is several million dollars per substance and it may take three or four years to complete.
These toxicity tests provide, in the words of a 2006 United States National Academy of Sciences report, "critical information for assessing hazard and risk potential". However, as Nature reported, most animal tests either over- or underestimate risk, or do not reflect toxicity in humans particularly well, with false positive results being a particular problem. This variability stems from using the effects of high doses of chemicals in small numbers of laboratory animals to try to predict the effects of low doses in large numbers of humans. Although relationships do exist, opinion is divided on how to use data on one species to predict the exact level of risk in another.
Products in Europe not tested on animals carry this symbol.
Cosmetics testing on animals is particularly controversial. Such tests, which are still conducted in the U.S., involve general toxicity, eye and skin irritancy, phototoxicity (toxicity triggered by ultraviolet light) and mutagenicity.
Cosmetics testing is banned in the Netherlands, Belgium, and the UK, and in 2002, after 13 years of discussion, the European Union (EU) agreed to phase in a near-total ban on the sale of animal-tested cosmetics throughout the EU from 2009, and to ban all cosmetics-related animal testing. France, which is home to the world's largest cosmetics company, L'Oreal, has protested the proposed ban by lodging a case at the European Court of Justice in Luxembourg, asking that the ban be quashed. The ban is also opposed by the European Federation for Cosmetics Ingredients, which represents 70 companies in Switzerland, Belgium, France, Germany and Italy.
Beagles used for safety testing of pharmaceuticals in a British facility
Before the early 20th century, laws regulating drugs were lax. Currently, all new pharmaceuticals undergo rigorous animal testing before being licensed for human use. Tests on pharmaceutical products involve:
· metabolic tests, investigating pharmacokinetics - how drugs are absorbed, metabolized and excreted by the body when introduced orally, intravenously, intraperitoneally, intramuscularly, or transdermally.
· toxicology tests, which gauge acute, sub-acute, and chronic toxicity. Acute toxicity is studied by using a rising dose until signs of toxicity become apparent. Current European legislation demands that "acute toxicity tests must be carried out in two or more mammalian species" covering "at least two different routes of administration". Sub-acute toxicity testing is where the drug is given to the animals for four to six weeks in doses below the level at which it causes rapid poisoning, in order to discover whether any toxic drug metabolites build up over time. Testing for chronic toxicity can last up to two years and, in the European Union, is required to involve two species of mammals, one of which must be non-rodent.
· Specific tests on reproductive function, embryonic toxicity, or carcinogenic potential can all be required by law, depending on the result of other studies and the type of drug being tested.
A technician assessing mice in a typical research vivarium
Animal experiments are widely used to develop new medicines and to test the safety of other products.
Many of these experiments cause pain to the animals involved or reduce their quality of life in other ways.
If it is morally wrong to cause animals to suffer then experimenting on animals produces serious moral problems.
Animal experimenters are very aware of this ethical problem and acknowledge that experiments should be made as humane as possible.
They also agree that it's wrong to use animals if alternative testing methods would produce equally valid results.
Two positions on animal experiments
· In favour of animal experiments:
· Experimenting on animals is acceptable if (and only if):
· suffering is minimised in all experiments
· human benefits are gained which could not be obtained by using other methods
· Against animal experiments:
· Experimenting on animals is always unacceptable because:
· it causes suffering to animals
· the benefits to human beings are not proven
· any benefits to human beings that animal testing does provide could be produced in other ways
Harm versus benefit
The case for animal experiments is that they will produce such great benefits for humanity that it is morally acceptable to harm a few animals.
The equivalent case against is that the level of suffering and the number of animals involved are both so high that the benefits to humanity don't provide moral justification.
The three Rs
The three Rs are a set of principles that scientists are encouraged to follow in order to reduce the impact of research on animals.
The three Rs are: Reduction, Refinement, Replacement.
· Reducing the number of animals used in experiments by:
· Improving experimental techniques
· Improving techniques of data analysis
· Sharing information with other researchers
· Refining the experiment or the way the animals are cared for so as to reduce their suffering by:
· Using less invasive techniques
· Better medical care
· Better living conditions
· Replacing experiments on animals with alternative techniques such as:
· Experimenting on cell cultures instead of whole animals
· Using computer models
· Studying human volunteers
· Using epidemiological studies
Animal experiments and drug safety
Scientists say that banning animal experiments would mean either
· an end to testing new drugs or
· using human beings for all safety tests
Animal experiments are not used to show that drugs are safe and effective in human beings - they cannot do that. Instead, they are used to help decide whether a particular drug should be tested on people.
Animal experiments eliminate some potential drugs as either ineffective or too dangerous to use on human beings. If a drug passes the animal test it's then tested on a small human group before large scale clinical trials.
The pharmacologist William D H Carey demonstrated the importance of animal testing in a letter to the British Medical Journal:
We have 4 possible new drugs to cure HIV. Drug A killed all the rats, mice and dogs. Drug B killed all the dogs and rats. Drug C killed all the mice and rats. Drug D was taken by all the animals up to huge doses with no ill effect. Question: Which of those drugs should we give to some healthy young human volunteers as the first dose to humans (all other things being equal)?
To the undecided (and non-prejudiced) the answer is, of course, obvious. It would also be obvious to a normal 12 year old child...
An alternative, acceptable answer would be, none of those drugs because even drug D could cause damage to humans. That is true, which is why Drug D would be given as a single, very small dose to human volunteers under tightly controlled and regulated conditions.(William DH Carey, BMJ 2002; 324: 236a)
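Carey's thought experiment is, in effect, a screening rule: a candidate advances to a first-in-human micro-dose only if no animal species died in preclinical testing. A minimal sketch, encoding only his hypothetical drugs A-D:

```python
# Carey's hypothetical candidates, mapped to the species each one killed.
candidates = {
    "A": {"rats", "mice", "dogs"},
    "B": {"dogs", "rats"},
    "C": {"mice", "rats"},
    "D": set(),  # tolerated by all species up to huge doses
}

# Screening rule: advance only drugs that killed no animals.
safe_to_advance = sorted(drug for drug, killed in candidates.items() if not killed)
print(safe_to_advance)  # -> ['D']
```

As Carey's follow-up makes clear, passing the screen does not establish safety in humans; it only selects which candidate receives a single, very small, tightly controlled first human dose.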
Are animal experiments useful?
Animal experiments only benefit human beings if their results are valid and can be applied to human beings.
Not all scientists are convinced that these tests are valid and useful.
...animals have not been as critical to the advancement of medicine as is typically claimed by proponents of animal experimentation.
Moreover, a great deal of animal experimentation has been misleading and resulted in either withholding of drugs, sometimes for years, that were subsequently found to be highly beneficial to humans, or to the release and use of drugs that, though harmless to animals, have actually contributed to human suffering and death.(Jane Goodall 'Reason for Hope', 1999)
The moral status of the experimenters
Animal rights extremists often portray those who experiment on animals as being so cruel as to have forfeited any moral standing of their own.
But the argument is about whether the experiments are morally right or wrong. The general moral character of the experimenter is irrelevant.
What is relevant is the ethical approach of the experimenter to each experiment. John P Gluck has suggested that this is often lacking:
The lack of ethical self-examination is common and generally involves the denial or avoidance of animal suffering, resulting in the dehumanization of researchers and the ethical degradation of their research subjects.(John P. Gluck; Ethics and Behavior, Vol. 1, 1991)
Gluck offers this advice for people who may need to experiment on animals:
The use of animals in research should evolve out of a strong sense of ethical self-examination. Ethical self-examination involves a careful self-analysis of one's own personal and scientific motives. Moreover, it requires a recognition of animal suffering and a satisfactory working through of that suffering in terms of one's ethical values. (John P. Gluck; Ethics and Behavior, Vol. 1, 1991)
Animal experiments and animal rights
The issue of animal experiments is straightforward if we accept that animals have rights: if an experiment violates the rights of an animal, then it is morally wrong, because it is wrong to violate rights.
The possible benefits to humanity of performing the experiment are completely irrelevant to the morality of the case, because rights should never be violated (except in obvious cases like self-defence).
And as one philosopher has written, if this means that there are some things that humanity will never be able to learn, so be it.
This bleak result of deciding the morality of experimenting on animals on the basis of rights is probably why people always justify animal experiments on consequentialist grounds; by showing that the benefits to humanity justify the suffering of the animals involved.
Justifying animal experiments
Those in favour of animal experiments say that the good done to human beings outweighs the harm done to animals.
This is a consequentialist argument, because it looks at the consequences of the actions under consideration.
It can't be used to defend all forms of experimentation since there are some forms of suffering that are probably impossible to justify even if the benefits are exceptionally valuable to humanity.
Animal experiments and ethical arithmetic
The consequentialist justification of animal experimentation can be demonstrated by comparing the moral consequences of doing or not doing an experiment.
This process can't be used in a mathematical way to help people decide ethical questions in practice, but it does demonstrate the issues very clearly.
The basic arithmetic
If performing an experiment would cause more harm than not performing it, then it is ethically wrong to perform that experiment.
The harm that will result from not doing the experiment is the result of multiplying three things together:
· the moral value of a human being
· the number of human beings who would have benefited
· the value of the benefit that each human being won't get
The harm that the experiment will cause is the result of multiplying together:
· the moral value of an experimental animal
· the number of animals suffering in the experiment
· the negative value of the harm done to each animal
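The two three-factor products described above can be written out explicitly. The weights below are placeholders with no defensible basis, which is precisely the text's point: the structure of the calculation is clear, but the inputs are not.

```python
# Illustrative encoding of the "ethical arithmetic". All numeric values
# are placeholders -- the text argues no defensible numbers exist for them.

def harm_of_not_experimenting(human_value, n_humans, benefit_each):
    # moral value of a human x number who would benefit x value of each benefit
    return human_value * n_humans * benefit_each

def harm_of_experimenting(animal_value, n_animals, harm_each):
    # moral value of an animal x number suffering x negative value of each harm
    return animal_value * n_animals * harm_each

forgone = harm_of_not_experimenting(human_value=1.0, n_humans=10_000, benefit_each=0.5)
caused = harm_of_experimenting(animal_value=0.1, n_animals=500, harm_each=0.8)

# The naive consequentialist rule: experiment only if the harm caused
# is less than the harm forgone by not experimenting.
print(caused < forgone)  # -> True
```

With these arbitrary weights the rule sanctions the experiment, but as the following paragraphs argue, the comparison breaks down: the moral values are unassignable, one side of the inequality is certain while the other is speculative, and acts and omissions are not weighted equally.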
But it isn't that simple because:
· it's virtually impossible to assign a moral value to a being
· it's virtually impossible to assign a value to the harm done to each individual
· the harm that will be done by the experiment is known beforehand, but the benefit is unknown
· the harm done by the experiment is caused by an action, while the harm resulting from not doing it is caused by an omission
Certain versus potential harm
In the theoretical sum above, the harm the experiment will do to animals is weighed against the harm done to humans by not doing the experiment.
But these are two conceptually different things.
· The harm that will be done to the animals is certain to happen if the experiment is carried out
· The harm done to human beings by not doing the experiment is unknown because no-one knows how likely the experiment is to succeed or what benefits it might produce if it did succeed
So the equation is completely useless as a way of deciding whether it is ethically acceptable to perform an experiment, because until the experiment is carried out, no-one can know the value of the benefit that it produces.
And there's another factor missing from the equation, which is discussed in the next section.
Acts and omissions
The equation doesn't deal with the moral difference between acts and omissions.
Most ethicists think that we have a greater moral responsibility for the things we do than for the things we fail to do; i.e. that it is morally worse to do harm by doing something than to do harm by not doing something.
For example: we think that the person who deliberately drowns a child has done something much more wrong than the person who refuses to wade into a shallow pool to rescue a drowning child.
In the animal experiment context, if the experiment takes place, the experimenter will carry out actions that harm the animals involved.
If the experiment does not take place, the experimenter will not do anything. This may harm human beings, who will not benefit from a cure for their disease because the cure will not be developed.
So the acts and omissions argument could lead us to say that
· it is morally worse for the experimenter to harm the animals by experimenting on them
· than it is to (potentially) harm some human beings by not doing an experiment that might find a cure for their disease.
And so if we want to continue with the arithmetic that we started in the section above, we need to put an additional, and different, factor on each side of the equation to deal with the different moral values of acts and omissions.
One writer suggests that we can cut out a lot of philosophising about animal experiments by using this test:
...whenever experimenters claim that their experiments are important enough to justify the use of animals, we should ask them whether they would be prepared to use a brain-damaged human being at a similar mental level to the animals they are planning to use.
Peter Singer, Animal Liberation, Avon, 1991
Sadly, there are a number of examples where researchers have been prepared to experiment on human beings in ways that should not have been permitted on animals.
And another philosopher suggests that it would anyway be more effective to research on normal human beings:
Whatever benefits animal experimentation is thought to hold in store for us, those very same benefits could be obtained through experimenting on humans instead of animals. Indeed, given that problems exist because scientists must extrapolate from animal models to humans, one might think there are good scientific reasons for preferring human subjects.
Justifying Animal Experimentation: The Starting Point, in Why Animal Experimentation Matters: The Use of Animals in Medical Research, 2001
If those human subjects were normal and able to give free and informed consent to the experiment then this might not be morally objectionable.
Proposed EU directive
In November 2008 the European Union put forward proposals to revise the directive for the protection of animals used in scientific experiments in line with the three R principle of replacing, reducing and refining the use of animals in experiments. The proposals have three aims:
· to considerably improve the welfare of animals used in scientific procedures
· to ensure fair competition for industry
· to boost research activities in the European Union
The proposed directive covers all live non-human vertebrate animals intended for experiments plus certain other species likely to experience pain, and also animals specifically bred so that their organs or tissue can be used in scientific procedures.
The main changes proposed are:
· to make it compulsory to carry out ethical reviews and require that experiments where animals are used be subject to authorisation
· to widen the scope of the directive to include specific invertebrate species, foetuses in their last trimester of development, and larvae and other animals used in basic research, education and training
· to set minimum housing and care requirements
· to require that only animals of second or older generations be used, subject to transitional periods, to avoid taking animals from the wild and exhausting wild populations
· to state that alternatives to testing on animals must be used when available and that the number of animals used in projects be reduced to a minimum
· to require member states to improve the breeding, accommodation and care measures and methods used in procedures so as to eliminate or reduce to a minimum any possible pain, suffering, distress or lasting harm caused to animals
The proposal also introduces a ban on the use of great apes - chimpanzees, bonobos, gorillas and orangutans - in scientific procedures, other than in exceptional circumstances, but there is no proposal to phase out the use of other non-human primates in the immediate foreseeable future.
Scientists and governments state that animal testing should cause as little suffering to animals as possible, and that animal tests should only be performed where necessary. The "three Rs" are guiding principles for the use of animals in research in most countries:
1. Replacement refers to the preferred use of non-animal methods over animal methods whenever it is possible to achieve the same scientific aim.
2. Reduction refers to methods that enable researchers to obtain comparable levels of information from fewer animals, or to obtain more information from the same number of animals.
3. Refinement refers to methods that alleviate or minimize potential pain, suffering or distress, and enhance animal welfare for the animals still used.
Although such principles have been welcomed as a step forward by some animal welfare groups, they have also been criticized as outdated by current research and as having little practical effect in improving animal welfare.
ETHICS PROBLEMS OF CONDUCTING OF CLINICAL TRIALS
Clinical trials are conducted to allow safety and efficacy data to be collected for health interventions (e.g., drugs, devices, therapy protocols). These trials can take place only once satisfactory information has been gathered on product quality and non-clinical safety, and Health Authority/Ethics Committee approval is granted in the country where the trial is taking place.
Depending on the type of product and the stage of its development, investigators enroll healthy volunteers and/or patients into small pilot studies initially, followed by larger scale studies in patients that often compare the new product with the currently prescribed treatment. As positive safety and efficacy data are gathered, the number of patients is typically increased. Clinical trials can vary in size from a single center in one country to multicenter trials in multiple countries.
Due to the sizable cost a full series of clinical trials may incur, the burden of paying for all the necessary people and services is usually borne by the sponsor, which may be a governmental organization, a pharmaceutical company, or a biotechnology company. Since the diversity of roles may exceed the resources of the sponsor, a clinical trial is often managed by an outsourced partner, such as a contract research organization.
In planning a clinical trial, the sponsor or investigator first identifies the medication or device to be tested. Usually, one or more pilot experiments are conducted to gain insights for the design of the clinical trial to follow. In medical jargon, effectiveness is how well a treatment works in practice, and efficacy is how well it works in a clinical trial. In the U.S., the elderly comprise only 14% of the population but consume over one-third of drugs. Despite this, they are often excluded from trials because their more frequent health issues and drug use produce messier data. Women, children, and people with common medical conditions are also frequently excluded.
In coordination with a panel of expert investigators (usually physicians well-known for their publications and clinical experience), the sponsor decides what to compare the new agent with (one or more existing treatments or a placebo), and what kind of patients might benefit from the medication/device. If the sponsor cannot obtain enough patients with the specific disease or condition at one location, investigators at other locations who can enroll the same kind of patients are recruited into the study.
During the clinical trial, the investigators: recruit patients with the predetermined characteristics, administer the treatment(s), and collect data on the patients' health for a defined time period. These data include measurements like vital signs, concentration of study drug in the blood, and whether the patient's health gets better or not. The researchers send the data to the trial sponsor who then analyzes the pooled data using statistical tests.
Some examples of what a clinical trial may be designed to do:
· assess the safety and effectiveness of a new medication or device on a specific kind of patient (e.g., patients who have been diagnosed with Alzheimer's disease)
· assess the safety and effectiveness of a different dose of a medication than is commonly used (e.g., 10 mg dose instead of 5 mg dose)
· assess the safety and effectiveness of an already marketed medication or device for a new indication, i.e. a disease for which the drug is not specifically approved
· assess whether the new medication or device is more effective for the patient's condition than the already used, standard medication or device ("the gold standard" or "standard therapy")
· compare the effectiveness in patients with a specific disease of two or more already approved or common interventions for that disease (e.g., Device A vs. Device B, Therapy A vs. Therapy B)
Note that while most clinical trials compare two medications or devices, some trials compare three or four medications, doses of medications, or devices against each other.
Except for very small trials limited to a single location, the clinical trial design and objectives are written into a document called a clinical trial protocol. The protocol is the 'operating manual' for the clinical trial, and ensures that researchers in different locations all perform the trial in the same way on patients with the same characteristics. (This uniformity is designed to allow the data to be pooled.) A protocol is always used in multicenter trials.
Because the clinical trial is designed to test hypotheses and rigorously monitor and assess what happens, clinical trials can be seen as the application of the scientific method to understanding human or animal biology.
The most commonly performed clinical trials evaluate new drugs, medical devices (like a new catheter), biologics, psychological therapies, or other interventions. Clinical trials may be required before the national regulatory authority approves marketing of the drug or device, or a new dose of the drug, for use on patients.
Beginning in the 1980s, harmonization of clinical trial protocols was shown to be feasible across countries of the European Union. At the same time, coordination between Europe, Japan and the United States led to a joint regulatory-industry initiative on international harmonization, named in 1990 the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH). Currently, most clinical trial programs follow ICH guidelines, aimed at "ensuring that good quality, safe and effective medicines are developed and registered in the most efficient and cost-effective manner. These activities are pursued in the interest of the consumer and public health, to prevent unnecessary duplication of clinical trials in humans and to minimize the use of animal testing without compromising the regulatory obligations of safety and effectiveness."
Clinical trials were first introduced in Avicenna's The Canon of Medicine in 1025 AD, in which he laid down rules for the experimental use and testing of drugs and wrote a precise guide for practical experimentation in the process of discovering and proving the effectiveness of medical drugs and substances. He laid out the following rules and principles for testing the effectiveness of new drugs and medications, which still form the basis of modern clinical trials:
1. "The drug must be free from any extraneous accidental quality."
2. "It must be used on a simple, not a composite, disease."
3. "The drug must be tested with two contrary types of diseases, because sometimes a drug cures one disease by its essential qualities and another by its accidental ones."
4. "The quality of the drug must correspond to the strength of the disease. For example, there are some drugs whose heat is less than the coldness of certain diseases, so that they would have no effect on them."
5. "The time of action must be observed, so that essence and accident are not confused."
6. "The effect of the drug must be seen to occur constantly or in many cases, for if this did not happen, it was an accidental effect."
7. "The experimentation must be done with the human body, for testing a drug on a lion or a horse might not prove anything about its effect on man."
One of the most famous clinical trials was James Lind's demonstration in 1747 that citrus fruits cure scurvy. He compared the effects of various acidic substances, ranging from vinegar to cider, on groups of afflicted sailors, and found that the group given oranges and lemons had largely recovered from scurvy after six days.
Frederick Akbar Mahomed (d. 1884), who worked at Guy's Hospital in London, made substantial contributions to the process of clinical trials during his detailed clinical studies, where "he separated chronic nephritis with secondary hypertension from what we now term essential hypertension." He also founded "the Collective Investigation Record for the British Medical Association; this organization collected data from physicians practicing outside the hospital setting and was the precursor of modern collaborative clinical trials."
One way of classifying clinical trials is by the way the researchers behave.
· In an observational study, the investigators observe the subjects and measure their outcomes. The researchers do not actively manage the experiment. This is also called a natural experiment. An example is the Nurses' Health Study.
· In an interventional study, the investigators give the research subjects a particular medicine or other intervention. Usually, they compare the treated subjects to subjects who receive no treatment or standard treatment. Then the researchers measure how the subjects' health changes.
Another way of classifying trials is by their purpose. The U.S. National Institutes of Health (NIH) organizes trials into the following types:
· Prevention trials: look for better ways to prevent disease in people who have never had the disease or to prevent a disease from returning. These approaches may include medicines, vitamins, vaccines, minerals, or lifestyle changes.
· Screening trials: test the best way to detect certain diseases or health conditions.
· Diagnostic trials: conducted to find better tests or procedures for diagnosing a particular disease or condition.
· Treatment trials: test experimental treatments, new combinations of drugs, or new approaches to surgery or radiation therapy.
· Quality of life trials: explore ways to improve comfort and the quality of life for individuals with a chronic illness (a.k.a. Supportive Care trials).
· Compassionate use trials: provide experimental therapeutics prior to final FDA approval to patients whose options with other remedies have been unsuccessful. Usually, case by case approval must be granted by the FDA for such exceptions.
A fundamental distinction in evidence-based medicine is between observational studies and randomized controlled trials. Types of observational studies in epidemiology such as the cohort study and the case-control study provide less compelling evidence than the randomized controlled trial. In observational studies, the investigators only observe associations (correlations) between the treatments experienced by participants and their health status or diseases.
A randomized controlled trial is the study design that can provide the most compelling evidence that the study treatment causes the expected effect on human health.
· Randomized: Each study subject is randomly assigned to receive either the study treatment or a placebo.
· Blind: The subjects involved in the study do not know which study treatment they receive. If the study is double-blind, the researchers also do not know which treatment is being given to any given subject. This 'blinding' is to prevent biases, since if a physician knew which patient was getting the study treatment and which patient was getting the placebo, he/she might be tempted to give the (presumably helpful) study drug to a patient who could more easily benefit from it. In addition, a physician might give extra care to only the patients who receive the placebos to compensate for their ineffectiveness. A form of double-blind study called a "double-dummy" design allows additional insurance against bias or placebo effect. In this kind of study, all patients are given both placebo and active doses in alternating periods of time during the study.
· Placebo-controlled: The use of a placebo (fake treatment) allows the researchers to isolate the effect of the study treatment.
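The randomization and blinding steps above can be sketched in code. This is a minimal illustration of simple (unstratified) randomization; the helper `randomize` and the arm codes are hypothetical, and real trials use validated randomization systems, often with block or stratified schemes:

```python
import random

def randomize(subjects, arms=("treatment", "placebo"), seed=None):
    """Assign each subject to a study arm at random.

    Returns (blinded, key): `blinded` maps subjects to opaque arm codes,
    while `key` (held only by the unblinded statistician) maps codes
    back to the real arms. Illustrative sketch, not a validated system.
    """
    rng = random.Random(seed)
    codes = {arm: f"ARM-{i}" for i, arm in enumerate(arms)}
    assignment = {s: rng.choice(arms) for s in subjects}
    blinded = {s: codes[a] for s, a in assignment.items()}
    key = {code: arm for arm, code in codes.items()}
    return blinded, key

# Investigators and subjects see only the opaque codes:
blinded, key = randomize([f"S{i:03d}" for i in range(10)], seed=42)
```

Keeping the unblinding key physically separate from the subject-level records is what prevents both investigators and subjects from knowing who receives the study drug.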
Of note, during the last ten years or so it has become a common practice to conduct "active comparator" studies (also known as "active control" trials). In other words, when a treatment exists that is clearly better than doing nothing for the subject (i.e. giving them the placebo), the alternate treatment would be a standard-of-care therapy. The study would compare the 'test' treatment to standard-of-care therapy.
Although the term "clinical trials" is most commonly associated with the large, randomized studies typical of Phase III, many clinical trials are small. They may be "sponsored" by single physicians or a small group of physicians, and are designed to test simple questions. In the field of rare diseases sometimes the number of patients might be the limiting factor for a clinical trial. Other clinical trials require large numbers of participants (who may be followed over long periods of time), and the trial sponsor is a private company, a government health agency, or an academic research body such as a university.
Clinical trial protocol
A clinical trial protocol is a document used to obtain confirmation of the trial design from a panel of experts and to ensure adherence to it by all study investigators, even when the trial is conducted in several countries.
The protocol describes the scientific rationale, objective(s), design, methodology, statistical considerations, and organization of the planned trial. Details of the trial are also provided in other documents referenced in the protocol such as an Investigator's Brochure.
The protocol contains a precise study plan for executing the clinical trial, not only to assure safety and health of the trial subjects, but also to provide an exact template for trial conduct by investigators at multiple locations (in a "multicenter" trial) to perform the study in exactly the same way. This harmonization allows data to be combined collectively as though all investigators (referred to as "sites") were working closely together. The protocol also gives the study administrators (often a contract research organization) as well as the site team of physicians, nurses and clinic administrators a common reference document for site responsibilities during the trial.
The format and content of clinical trial protocols sponsored by pharmaceutical, biotechnology or medical device companies in the United States, European Union, or Japan has been standardized to follow Good Clinical Practice guidance issued by the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH). Regulatory authorities in Canada and Australia also follow ICH guidelines. Some journals, e.g. Trials, encourage trialists to publish their protocols in the journal.
An essential component of initiating a clinical trial is to recruit study subjects following procedures using a signed document called "informed consent."
Informed consent is a legally defined process by which a person is told the key facts about a clinical trial before deciding whether or not to participate. To fully describe participation to a candidate subject, the doctors and nurses involved in the trial explain the details of the study in terms the person will understand. Foreign-language translation is provided if the participant's native language is not that of the study protocol.
The research team provides an informed consent document that includes trial details, such as its purpose, duration, required procedures, risks, potential benefits and key contacts. The participant then decides whether or not to sign the document in agreement. Informed consent is not an immutable contract, as the participant can withdraw at any time without penalty.
In designing a clinical trial, a sponsor must decide on the target number of patients who will participate. The sponsor's goal usually is to obtain a statistically significant result showing a significant difference in outcome (e.g., number of deaths after 28 days in the study) between the groups of patients who receive the study treatments. The number of patients required to give a statistically significant result depends on the question the trial wants to answer. For example, to show the effectiveness of a new drug in a non-curable disease such as metastatic kidney cancer requires many fewer patients than in a highly curable disease such as seminoma, if the drug is compared to a placebo.
The number of patients enrolled in a study has a large bearing on the ability of the study to reliably detect the size of the effect of the study intervention. This is described as the "power" of the trial. The larger the sample size or number of participants in the trial, the greater the statistical power.
However, in designing a clinical trial, this consideration must be balanced with the fact that more patients make for a more expensive trial. The power of a trial is not a single, unique value; it estimates the ability of a trial to detect a difference of a particular size (or larger) between the treated (tested drug/device) and control (placebo or standard treatment) groups. For example, a trial of a lipid-lowering drug versus placebo with 100 patients in each group might have a power of .90 to detect a difference of 10 mg/dL or more between patients receiving the study drug and patients receiving placebo, but only a power of .70 to detect a difference of 5 mg/dL.
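The dependence of power on effect size can be approximated by Monte Carlo simulation. In the sketch below, `simulated_power` is a hypothetical helper using a two-sample z-test, and the standard deviation of 22 mg/dL is an assumed value (not given in the text) chosen so that a 10 mg/dL difference with 100 patients per group yields a power near .90:

```python
import random
from statistics import NormalDist, mean

def simulated_power(n, effect, sd, alpha=0.05, trials=2000, seed=1):
    """Estimate trial power by simulation: the fraction of simulated
    trials in which a two-sample z-test detects the true `effect`
    (difference in group means) at significance level `alpha`."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    se = sd * (2.0 / n) ** 0.5          # standard error of the mean difference
    hits = 0
    for _ in range(trials):
        drug = [rng.gauss(effect, sd) for _ in range(n)]
        placebo = [rng.gauss(0.0, sd) for _ in range(n)]
        z = (mean(drug) - mean(placebo)) / se
        if abs(z) >= z_crit:
            hits += 1
    return hits / trials

# Larger effects are easier to detect with the same sample size:
p10 = simulated_power(n=100, effect=10, sd=22)   # near .90
p5 = simulated_power(n=100, effect=5, sd=22)     # substantially lower
```

Raising `n` (at greater cost) would pull the power for the 5 mg/dL effect upward, which is exactly the trade-off described above.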
Merely giving a treatment can have nonspecific effects, and these are controlled for by the inclusion of a placebo group. Subjects in the treatment and placebo groups are assigned randomly and blinded as to which group they belong. Since researchers can behave differently toward subjects given treatments or placebos, trials are also double-blinded so that the researchers do not know to which group a subject is assigned.
Assigning a person to a placebo group can pose an ethical problem if it violates his or her right to receive the best available treatment. The Declaration of Helsinki provides guidelines on this issue.
Clinical trials involving new drugs are commonly classified into four phases. Each phase of the drug approval process is treated as a separate clinical trial. The drug-development process will normally proceed through all four phases over many years. If the drug successfully passes through Phases I, II, and III, it will usually be approved by the national regulatory authority for use in the general population. Phase IV trials are 'post-approval' studies.
Before pharmaceutical companies start clinical trials on a drug, they conduct extensive pre-clinical studies.
Pre-clinical studies involve in vitro (test tube or cell culture) and in vivo (animal) experiments using wide-ranging doses of the study drug to obtain preliminary efficacy, toxicity and pharmacokinetic information. Such tests assist pharmaceutical companies in deciding whether a drug candidate has scientific merit for further development as an investigational new drug.
Phase 0 is a recent designation for exploratory, first-in-human trials conducted in accordance with the United States Food and Drug Administration's (FDA) 2006 Guidance on Exploratory Investigational New Drug (IND) Studies. Phase 0 trials are also known as human microdosing studies and are designed to speed up the development of promising drugs or imaging agents by establishing very early on whether the drug or agent behaves in human subjects as was expected from preclinical studies. Distinctive features of Phase 0 trials include the administration of single subtherapeutic doses of the study drug to a small number of subjects (10 to 15) to gather preliminary data on the agent's pharmacokinetics (how the body processes the drug) and pharmacodynamics (how the drug works in the body).
A Phase 0 study gives no data on safety or efficacy, being by definition a dose too low to cause any therapeutic effect. Drug development companies carry out Phase 0 studies to rank drug candidates in order to decide which has the best pharmacokinetic parameters in humans to take forward into further development. They enable go/no-go decisions to be based on relevant human models instead of relying on sometimes inconsistent animal data.
Questions have been raised by experts about whether Phase 0 trials are useful, ethically acceptable, feasible, speed up the drug development process or save money, and whether there is room for improvement.
Phase I trials are the first stage of testing in human subjects. Normally, a small (20-100) group of healthy volunteers will be selected. This phase includes trials designed to assess the safety (pharmacovigilance), tolerability, pharmacokinetics, and pharmacodynamics of a drug. These trials are often conducted in an inpatient clinic, where the subject can be observed by full-time staff. The subject who receives the drug is usually observed until several half-lives of the drug have passed. Phase I trials also normally include dose-ranging, also called dose escalation, studies so that the appropriate dose for therapeutic use can be found. The tested range of doses will usually be a fraction of the dose that causes harm in animal testing. Phase I trials most often include healthy volunteers. However, there are some circumstances when real patients are used, such as patients who have terminal cancer or HIV and lack other treatment options. Volunteers are paid an inconvenience fee for their time spent in the volunteer centre. Pay ranges from a small amount of money for a short period of residence to a larger amount of up to approximately $6,000, depending on length of participation.
There are different kinds of Phase I trials:
Single Ascending Dose studies are those in which small groups of subjects are given a single dose of the drug while they are observed and tested for a period of time. If they do not exhibit any adverse side effects, and the pharmacokinetic data are roughly in line with predicted safe values, the dose is escalated, and a new group of subjects is then given a higher dose. This is continued until pre-calculated pharmacokinetic safety levels are reached, or intolerable side effects start showing up (at which point the drug is said to have reached the maximum tolerated dose, or MTD).
Multiple Ascending Dose studies are conducted to better understand the pharmacokinetics & pharmacodynamics of multiple doses of the drug. In these studies, a group of patients receives multiple low doses of the drug, while samples (of blood, and other fluids) are collected at various time points and analyzed to understand how the drug is processed within the body. The dose is subsequently escalated for further groups, up to a predetermined level.
Food effect studies are short trials designed to investigate any differences in the body's absorption of the drug caused by eating before the drug is given. These studies are usually run as crossover studies, with volunteers being given two identical doses of the drug on different occasions: one while fasted, and one after a meal.
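The single-ascending-dose escalation logic described above reduces to a simple loop. The sketch below is schematic only: `cohort_tolerates` is a hypothetical stand-in for the clinical safety review of one cohort, and real escalation designs (such as the 3+3 scheme) involve more elaborate rules:

```python
def single_ascending_dose(doses, cohort_tolerates):
    """Walk up a pre-planned dose ladder, one cohort per dose.
    Stop at the first dose a cohort does not tolerate; the last
    tolerated dose is reported as the maximum tolerated dose (MTD).
    Returns None if even the lowest dose is intolerable."""
    mtd = None
    for dose in doses:
        if not cohort_tolerates(dose):   # safety review fails: stop escalating
            break
        mtd = dose                       # this dose was tolerated
    return mtd

# Example with a toy tolerance rule (doses in mg):
mtd = single_ascending_dose([1, 2, 5, 10, 20], lambda d: d < 10)  # -> 5
```

The pre-calculated pharmacokinetic safety ceiling mentioned above would simply appear as one more stopping condition inside the loop.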
Once the initial safety of the study drug has been confirmed in Phase I trials, Phase II trials are performed on larger groups (20-300) and are designed to assess how well the drug works, as well as to continue Phase I safety assessments in a larger group of volunteers and patients. When the development process for a new drug fails, this usually occurs during Phase II trials when the drug is discovered not to work as planned, or to have toxic effects.
Phase II studies are sometimes divided into Phase IIA and Phase IIB.
· Phase IIA is specifically designed to assess dosing requirements (how much drug should be given).
· Phase IIB is specifically designed to study efficacy (how well the drug works at the prescribed dose(s)).
Some trials combine Phase I and Phase II, and test both efficacy and toxicity.
Some Phase II trials are designed as case series, demonstrating a drug's safety and activity in a selected group of patients. Other Phase II trials are designed as randomized clinical trials, where some patients receive the drug/device and others receive placebo/standard treatment. Randomized Phase II trials have far fewer patients than randomized Phase III trials.
Phase III studies are randomized controlled multicenter trials on large patient groups (300–3,000 or more depending upon the disease/medical condition studied) and are aimed at being the definitive assessment of how effective the drug is, in comparison with current 'gold standard' treatment. Because of their size and comparatively long duration, Phase III trials are the most expensive, time-consuming and difficult trials to design and run, especially in therapies for chronic medical conditions.
It is common practice that certain Phase III trials will continue while the regulatory submission is pending at the appropriate regulatory agency. This allows patients to continue to receive possibly lifesaving drugs until the drug can be obtained by purchase. Other reasons for performing trials at this stage include attempts by the sponsor at "label expansion" (to show the drug works for additional types of patients/diseases beyond the original use for which the drug was approved for marketing), to obtain additional safety data, or to support marketing claims for the drug. Studies in this phase are by some companies categorised as "Phase IIIB studies."
While not required in all cases, it is typically expected that at least two successful Phase III trials demonstrating a drug's safety and efficacy are needed to obtain approval from the appropriate regulatory agencies, such as the FDA (USA) or the EMEA (European Union).
Once a drug has proved satisfactory after Phase III trials, the trial results are usually combined into a large document containing a comprehensive description of the methods and results of human and animal studies, manufacturing procedures, formulation details, and shelf life. This collection of information makes up the "regulatory submission" that is provided for review to the appropriate regulatory authorities in different countries. They will review the submission, and, it is hoped, give the sponsor approval to market the drug.
Most drugs undergoing Phase III clinical trials can be marketed under FDA norms with proper recommendations and guidelines, but if any adverse effects are reported anywhere, the drugs must be recalled from the market immediately. While most pharmaceutical companies refrain from this practice, it is not unusual to see drugs that are still undergoing Phase III clinical trials on the market.
Phase IV trial is also known as Post Marketing Surveillance Trial. Phase IV trials involve the safety surveillance (pharmacovigilance) and ongoing technical support of a drug after it receives permission to be sold. Phase IV studies may be required by regulatory authorities or may be undertaken by the sponsoring company for competitive (finding a new market for the drug) or other reasons (for example, the drug may not have been tested for interactions with other drugs, or on certain population groups such as pregnant women, who are unlikely to subject themselves to trials). The safety surveillance is designed to detect any rare or long-term adverse effects over a much larger patient population and longer time period than was possible during the Phase I-III clinical trials. Harmful effects discovered by Phase IV trials may result in a drug being no longer sold, or restricted to certain uses: recent examples involve cerivastatin (brand names Baycol and Lipobay), troglitazone (Rezulin) and rofecoxib (Vioxx).
Clinical trials are only a small part of the research that goes into developing a new treatment. Potential drugs, for example, first have to be discovered, purified, characterized, and tested in labs (in cell and animal studies) before ever undergoing clinical trials. In all, about 1,000 potential drugs are tested before just one reaches the point of being tested in a clinical trial. For example, a new cancer drug has, on average, 6 years of research behind it before it even makes it to clinical trials. But the major holdup in making new cancer drugs available is the time it takes to complete clinical trials themselves. On average, about 8 years pass from the time a cancer drug enters clinical trials until it receives approval from regulatory agencies for sale to the public. Drugs for other diseases have similar timelines.
Some reasons a clinical trial might last several years:
· For chronic conditions like cancer, it takes months, if not years, to see if a cancer treatment has an effect on a patient.
· For drugs that are not expected to have a strong effect (meaning a large number of patients must be recruited to observe any effect), recruiting enough patients to test the drug's effectiveness (i.e., getting statistical power) can take several years.
· Only certain people who have the target disease condition are eligible to take part in each clinical trial. Researchers who treat these particular patients must participate in the trial. Then they must identify the desirable patients and obtain consent from them or their families to take part in the trial.
The biggest barrier to completing studies is the shortage of people who take part. All drug and many device trials target a subset of the population, meaning not everyone can participate. Some drug trials require patients to have unusual combinations of disease characteristics. It is a challenge to find the appropriate patients and obtain their consent, especially when they may receive no direct benefit (because they are not paid, the study drug is not yet proven to work, or the patient may receive a placebo). In the case of cancer patients, fewer than 5% of adults with cancer will participate in drug trials. According to the Pharmaceutical Research and Manufacturers of America (PhRMA), about 400 cancer medicines were being tested in clinical trials in 2005. Not all of these will prove to be useful, but those that are may be delayed in getting approved because the number of participants is so low.
Clinical trials that do not involve a new drug usually have a much shorter duration. (Exceptions are epidemiological studies like the Nurses' Health Study.)
Clinical trials designed by a local investigator and (in the U.S.) federally funded clinical trials are almost always administered by the researcher who designed the study and applied for the grant. Small-scale device studies may be administered by the sponsoring company. Phase III and Phase IV clinical trials of new drugs are usually administered by a contract research organization (CRO) hired by the sponsoring company. (The sponsor provides the drug and medical oversight.) A CRO is a company that is contracted to perform all the administrative work on a clinical trial. It recruits participating researchers, trains them, provides them with supplies, coordinates study administration and data collection, sets up meetings, monitors the sites for compliance with the clinical protocol, and ensures that the sponsor receives 'clean' data from every site. Recently, site management organizations have also been hired to coordinate with the CRO to ensure rapid IRB/IEC approval and faster site initiation and patient recruitment.
At a participating site, one or more research assistants (often nurses) do most of the work in conducting the clinical trial. The research assistant's job can include some or all of the following: providing the local Institutional Review Board (IRB) with the documentation necessary to obtain its permission to conduct the study, assisting with study start-up, identifying eligible patients, obtaining consent from them or their families, administering study treatment(s), collecting and statistically analyzing data, maintaining and updating data files during followup, and communicating with the IRB, as well as the sponsor (if any) and CRO (if any).
Clinical trials are closely supervised by appropriate regulatory authorities. All studies that involve a medical or therapeutic intervention on patients must be approved by a supervising ethics committee before permission is granted to run the trial. The local ethics committee has discretion on how it will supervise noninterventional studies (observational studies or those using already collected data). In the U.S., this body is called the Institutional Review Board (IRB). Most IRBs are located at the local investigator's hospital or institution, but some sponsors allow the use of a central (independent/for profit) IRB for investigators who work at smaller institutions.
To be ethical, researchers must obtain the full and informed consent of participating human subjects. (One of the IRB's main functions is ensuring that potential patients are adequately informed about the clinical trial.) If the patient is unable to consent for him/herself, researchers can seek consent from the patient's legally authorized representative. In California, the state has prioritized the individuals who can serve as the legally authorized representative.
In some U.S. locations, the local IRB must certify researchers and their staff before they can conduct clinical trials. They must understand the federal patient privacy (HIPAA) law and good clinical practice. The International Conference on Harmonisation Guidelines for Good Clinical Practice (ICH GCP) is a set of standards used internationally for the conduct of clinical trials. The guidelines aim to ensure that the "rights, safety and well being of trial subjects are protected".
The notion of informed consent of participating human subjects exists in many countries all over the world, but its precise definition may still vary.
Informed consent is clearly a necessary condition for ethical conduct but does not ensure ethical conduct. The final objective is to serve the community of patients or future patients in the best possible and most responsible way. However, it may be hard to turn this objective into a well-defined, quantified objective function. In some cases this can be done, for instance for questions of when to stop sequential treatments (see the odds algorithm), and then quantified methods may play an important role.
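As a hedged illustration of such a quantified stopping rule, the sketch below implements Bruss's odds algorithm for the last-success problem: events are observed in order, and the odds of success are summed backwards from the final event until the sum reaches 1; the optimal rule is to stop at the first success from that point onward. The function name and interface are our own choices for illustration; this is a minimal mathematical sketch, not a clinical decision tool.

```python
def odds_algorithm(p):
    """Bruss's odds algorithm for the last-success (optimal stopping)
    problem: event i succeeds independently with probability p[i]
    (each p[i] < 1), and we want to stop on the last success.

    Returns (s, win_prob): the optimal rule is to stop at the first
    success occurring at index s or later; win_prob is the probability
    that this rule stops on the last success.
    """
    n = len(p)
    s = 0  # if the total odds never reach 1, observe from the start
    odds_sum = 0.0
    for i in range(n - 1, -1, -1):  # sum the odds backwards from the end
        odds_sum += p[i] / (1.0 - p[i])
        if odds_sum >= 1.0:
            s = i
            break
    # win probability of the optimal rule:
    # (product of failure probabilities from s on) * (sum of odds from s on)
    q_prod, o_sum = 1.0, 0.0
    for i in range(s, n):
        q_prod *= 1.0 - p[i]
        o_sum += p[i] / (1.0 - p[i])
    return s, q_prod * o_sum
```

For example, with three independent events of success probability 0.5 each, the rule says to wait for the final event, and the probability of stopping on the last success is 0.5.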
Responsibility for the safety of the subjects in a clinical trial is shared between the sponsor, the local site investigators (if different from the sponsor), the various IRBs that supervise the study, and (in some cases, if the study involves a marketable drug or device) the regulatory agency for the country where the drug or device will be sold.
Local site investigators
· A physician's first duty is to his/her patients, and if a physician investigator believes that the study treatment may be harming subjects in the study, the investigator can stop participating at any time. On the other hand, investigators often have a financial interest in recruiting subjects, and can act unethically in order to obtain and maintain their participation.
· The local investigators are responsible for conducting the study according to the study protocol, and supervising the study staff throughout the duration of the study.
· The local investigator or his/her study staff are responsible for ensuring that potential subjects in the study understand the risks and potential benefits of participating in the study; in other words, that they (or their legally authorized representatives) give truly informed consent.
· The local investigators are responsible for reviewing all adverse event reports sent by the sponsor. (These adverse event reports contain the opinion of both the investigator at the site where the adverse event occurred, and the sponsor, regarding the relationship of the adverse event to the study treatments). The local investigators are responsible for making an independent judgment of these reports, and promptly informing the local IRB of all serious and study-treatment-related adverse events.
· When a local investigator is the sponsor, there may not be formal adverse event reports, but study staff at all locations are responsible for informing the coordinating investigator of anything unexpected.
· The local investigator is responsible for being truthful to the local IRB in all communications relating to the study.
Approval by an IRB, or ethics board, is necessary before all but the most informal medical research can begin.
· In commercial clinical trials, the study protocol is not approved by an IRB before the sponsor recruits sites to conduct the trial. However, the study protocol and procedures have been tailored to fit generic IRB submission requirements. In this case, and where there is no independent sponsor, each local site investigator submits the study protocol, the consent(s), the data collection forms, and supporting documentation to the local IRB. Universities and most hospitals have in-house IRBs. Other researchers (such as in walk-in clinics) use independent IRBs.
· The IRB scrutinizes the study for both medical safety and protection of the patients involved in the study, before it allows the researcher to begin the study. It may require changes in study procedures or in the explanations given to the patient. A required yearly "continuing review" report from the investigator updates the IRB on the progress of the study and any new safety information related to the study.
· If a clinical trial concerns a new regulated drug or medical device (or an existing drug for a new purpose), the appropriate regulatory agency for each country where the sponsor wishes to sell the drug or device is supposed to review all study data before allowing the drug/device to proceed to the next phase, or to be marketed. However, if the sponsor withholds negative data, or misrepresents data it has acquired from clinical trials, the regulatory agency may make the wrong decision.
· In the U.S., the FDA can audit the files of local site investigators after they have finished participating in a study, to see if they were correctly following study procedures. This audit may be random, or for cause (because the investigator is suspected of submitting fraudulent data). The possibility of an audit is an incentive for investigators to follow study procedures.
Different countries have different regulatory requirements and enforcement abilities. An estimated 40 percent of all clinical trials now take place in Asia, Eastern Europe, and Central and South America. "There is no compulsory registration system for clinical trials in these countries and many do not follow European directives in their operations", says Dr. Jacob Sijtsma of the Netherlands-based WEMOS, an advocacy health organisation tracking clinical trials in developing countries.
Phase 0 and Phase I drug trials seek healthy volunteers. Most other clinical trials seek patients who have a specific disease or medical condition.
Steps for Volunteers
Before participating in a clinical trial, interested volunteers should speak with their doctors, family members, and others who have participated in trials in the past. After locating a trial, volunteers will often have the opportunity to speak with or e-mail the clinical trial coordinator for more information and to have any questions answered. After receiving consent from their doctors, volunteers then arrange an appointment for a screening visit with the trial coordinator.
All volunteers being considered for a trial are required to undergo a medical screening. There are different requirements for different trials, but typically volunteers will have the following tests:
· Measurement of the electrical activity of the heart (ECG)
· Measurement of blood pressure, heart rate and temperature
· Blood sampling
· Urine sampling
· Weight and height measurement
· Drug abuse testing
· Pregnancy testing (females only)
GUIDELINES ON TRANSGENIC ANIMALS (1997)
The Canadian Council on Animal Care (CCAC) is responsible for the oversight of animals used in research, teaching and testing. In addition to the Guide to the Care and Use of Experimental Animals, Vols. 1 and 2, which lay down general principles for the care and use of the animals, the CCAC also publishes guidelines on issues of current and emerging concern. The CCAC guidelines on: transgenic animals is the second in this series and has been produced by the scientific subcommittee of the CCAC. The creation and use of genetically-modified animals is a rapidly evolving field of research; therefore, these guidelines will be subject to regular review.
The following guidelines for transgenic animals are provided: to assist Animal Care Committee (ACC) members and investigators in evaluating the ethical and technological aspects of the proposed creation, care and use of transgenic animals; to ensure that transgenic animals are used in accordance with the CCAC statement Ethics of Animal Investigation; and to ensure that the well-being of Canadians and the environment are protected.
By definition, the term “transgenic animal” refers to an animal in which there has been a deliberate modification of the genome – the material responsible for inherited characteristics – in contrast to spontaneous mutation (FELASA, September 1992, revised February 1995). Since 1981, when the term “transgenic” was first introduced by J.W. Gordon and F.H. Ruddle, genetically- engineered animals have become increasingly important as research subjects.
Transgenic animals are used: in the basic biological study of regulatory gene elements; in medical research, to identify the functions of specific factors in complex homeostatic systems through over-or under-expression, as models of human disease; in toxicology as responsive test animals; in biotechnology as producers of specific proteins; and in agriculture and aquaculture to improve yields of meat and other animal products. This list is not inclusive; the use of transgenic animals is likely to expand in the future.
There are three main methods used for the production of transgenic animals: DNA microinjection; retrovirus-mediated gene transfer; and embryonic stem (ES) cell-mediated gene transfer. DNA microinjection is the first method that was developed and provides the underlying concept for the other two methods. The introduced DNA may lead to the over- or under-expression of certain genes or to the expression of novel genes. The integration of the introduced gene into the host DNA, which is accomplished by the microinjection of DNA into the pronucleus of a fertilized ovum, is a random process, and the introduced gene will not necessarily insert itself into a site that will permit its expression. Therefore, other methods have been devised, including vector-mediated gene transfer and homologous recombination, to increase the probability of expression. Retroviruses are commonly used as vectors to transfer genetic material into the cell. The third method uses homologous recombination of DNA to permit precise targeting of DNA sites in embryonic stem cells. If the homologous sequence to be introduced into the cell carries a mutation or a gene from another species, the new sequence will replace the specific targeted gene. This procedure, the so-called "knock-out" method, is the method of choice for gene inactivation and is of particular importance for the study of the genetic control of developmental processes.
Transgenic animals provide the investigator with an extremely powerful tool for the development of disease models, since they lead to a greater understanding of the mechanisms of gene regulation. In addition, the use of transgenic mouse models which more closely mimic human disease can replace the need to use more sentient animals as models. The greater specificity of such models may in time also lead to a reduction in the number of animals used. Genetic modification of livestock may also benefit human health through the economic and efficient production of important pharmaceutical proteins.
In parallel with the development of transgenic technology, ethical concerns have arisen about the use of this technology. These concerns are wide ranging and encompass animal welfare, human health and environmental issues. They include animal suffering caused by the expression of transgenes inducing tumors or neurodegenerative diseases, the possible escape of transgenic animals into the environment, and the possibility of modification of the human genome.
The production and use of transgenic animals are subject to all of the considerations raised by the CCAC guidelines on: animal use protocol review. Protocols must, therefore, be reviewed in the same manner. However, a close look must be given to the procedures involved and in particular to possible welfare concerns for the progeny from transgenic animal creation protocols. For these reasons the guidelines also require a Transgenic Information Sheet (Appendix) to be completed with the protocol submission.
In implementing these CCAC guidelines, ACCs and investigators considering the welfare of the animals in the proposed study will have to take into account the special features of each transgenic strain. In addition, they will have to be sensitive to ethical concerns and alert to technological changes in this rapidly evolving field. The CCAC anticipates that modifications to the guidelines will be required as this evolution occurs.
Investigator and Animal Care Committee Responsibilities
It is the responsibility of the ACC to ensure that all its members are informed about the ethical and technological aspects of transgenic animal use. A suggested reading list is attached.
It is also recommended that researchers applying for ACC approval to create or use transgenic animals be conversant with ethical concerns surrounding the use of these animals, and be prepared to justify their work as being in the public interest.
(b) Proposals to create new transgenic strains
(i) Standard procedures for creating transgenic animals can be dealt with by ACCs according to their usual practices for surgical procedures.
(ii) In reviewing applications for creation of novel transgenic animals, ACCs should determine that:
• the investigator has competent technical assistance and experience in the necessary record-keeping for breeding colony maintenance;
• arrangements for surgical procedures, colony housing and maintenance have been discussed with and approved by the local Animal Facility Management;
• the investigator and the technical staff involved in daily monitoring of the transgenic colony are familiar with signs of distress in the species of study;
• a frequent, reliable, thorough, and documented monitoring system is in place to detect behavioral, anatomical and physiological abnormalities indicative of animal distress; and
• endpoints for survival are clearly defined.
Standard operating procedures (SOPs) can be developed to deal with these concerns.
(iii) Proposals to create or use transgenic animals should include information about the expected phenotype (as indicated in the Appendix), anticipated pain or distress levels in the transgenic animal, measures which will be taken to alleviate such distress, and the required monitoring system.
(iv) Proposals to create novel transgenic animals should initially be assigned CCAC category of invasiveness level "D". If approval is merited, it should be provisional, limited to a 12-month period, and subject to the requirement that the investigator report back to the ACC as soon as feasible on the animals' phenotype, noting particularly any evidence of pain or distress.
After receiving the report from the investigator, the ACC may confirm approval of the proposal and adjust the level of invasiveness. However, if the animals are noted to be suffering unanticipated pain or distress, the ACC will ask the investigator to provide a revised protocol which will minimize and alleviate distress, and will reconsider its approval of the proposal.
(c) Proposals to utilize existing transgenic strains
(i) A proposal on transgenic animals may have two parts: creation of the transgenic animals, and subsequent experimental manipulations of the animals. Except where subsequent manipulations are restricted to observation and euthanasia of the transgenic animal, creation and use proposals should be considered as separate proposals.
(ii) In reviewing use proposals, ACCs should consider whether procedures regarded as acceptable in non-transgenic animals are still acceptable in transgenic animals, where the altered phenotype may impose additional stresses.
(iii) Proposals to use existing transgenic strains should also include the information requested in the Appendix.
(d) Animal numbers
(i) Estimates of all animals to be used or generated in a transgenic study should be stated in proposals to the ACC, listed by use category (e.g., oocyte donors, pseudopregnant females, male "studs", successful transgenics, etc.).
(ii) When completing the Animal Use Data Form for reporting annual animal usage to CCAC, investigators should identify transgenic animals separately from non-transgenic animals in the “Species” column.
(iii) To reduce overall animal use, CCAC encourages, when appropriate, assignment of non-transgenic animals, bred in a transgenic creation procedure, to other ACC-approved protocols.
Asymptomatic heterozygotes must be clearly identified and should only be used for breeding purposes when the investigator is aware of their altered genotype. Accounting procedures within animal facilities must prevent double-counting of such transferred animals in annual use statistics.
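The record-keeping requirements above (identifying transgenic animals separately, and accounting procedures that prevent double-counting of animals transferred between protocols) can be sketched as a small data structure. This is a hypothetical illustration only; the class and method names are our own and do not correspond to any CCAC-mandated system.

```python
class AnnualUseLedger:
    """Hypothetical annual-use ledger: each animal is counted once,
    with transgenic and non-transgenic animals tallied separately
    per species (names and structure are illustrative only)."""

    def __init__(self):
        self._counted = set()  # animal IDs already included in the totals
        self.totals = {}       # (species, is_transgenic) -> count

    def record(self, animal_id, species, is_transgenic):
        """Tally an animal; returns False (and changes nothing) if the
        animal was already counted, e.g. a non-transgenic animal bred
        under a creation protocol and later transferred to another
        ACC-approved protocol."""
        if animal_id in self._counted:
            return False
        self._counted.add(animal_id)
        key = (species, is_transgenic)
        self.totals[key] = self.totals.get(key, 0) + 1
        return True
```

Recording the same animal a second time, as after a transfer to another ACC-approved protocol, returns False and leaves the annual totals unchanged.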
(e) Containment
(i) All proposals for creation or use of transgenic animals must assure the ACC that risks to human health and the environment are minimized to an acceptable level. For transgenic animals created using microinjection or replication-defective viruses, the containment risks are limited to those associated with the escape of the animal and interbreeding with wild stocks.
Proposals should include information about:
• containment and security procedures in animal facilities and, if applicable, during transportation when importing the animal;
• plans for recapture should a breach of containment occur; and
• the consequences to human health or wild populations should containment fail.
(ii) For commonly-used transgenic species, each animal facility should have SOPs for containment, which can be referenced by proposals.
(iii) ACCs should discuss with the institutional Biohazard Committee any proposal which raises biohazard containment concerns.
(f) Other regulations
(i) ACC approval of a proposal does not relieve the investigator of responsibility to satisfy the regulations of any other governmental agencies. For example, creation of any transgenic fish strain requires approval of the Department of Fisheries and Oceans. Biohazard approval may also be required for some proposals.
Responsibilities of the Canadian Council on Animal Care
(a) Education
To update at least every two years a reading list on ethical and technical aspects of transgenic animal use which can be distributed to members of ACCs, this list to include articles appropriate for all members.
(b) Accounting and reporting
To include in its annual usage statistics separate totals for transgenic strains of each species used in experiments.
CANADIAN COUNCIL ON ANIMAL CARE GUIDELINES ON ANIMAL WELFARE
This section outlines animal welfare concerns which are likely to arise in the creation of transgenic animals and their maintenance in the laboratory setting. Consideration will also have to be given to developments which involve species to be held outside the confines of the laboratory.
1. The Canadian Council on Animal Care (CCAC) Guidelines on Transgenic Animals
In addition to the Guide to the Care and Use of Experimental Animals, the CCAC has recently developed guidelines on transgenic animals that will be subject to regular review due to the fast rate of evolution of this field. The intention behind the production of these guidelines is to assist ACC members and investigators in evaluating the ethical and technological aspects of the proposed creation, care and use of transgenic animals; to ensure that transgenic animals are used in accordance with the CCAC statement Ethics of Animal Investigation; and to ensure that the well-being of Canadians and the environment are protected.
ACC Protocol Review
The creation and use of transgenic animals are subject to all of the considerations raised by the CCAC guidelines on animal use protocol review. However, special consideration must be given to the procedures involved and in particular to possible welfare concerns for both the parent generation and the progeny. Evaluation of proposals for the creation of transgenic animals may be divided into two interrelated parts: first, the justification for creation of the particular transgenic animal; and secondly, the welfare issues underlying the creation process itself. Special attention must be devoted to new protocols that use, for example, previously uncharacterized vectors or new transgenes, and/or are being performed by investigators who are new to the techniques.
As in all animal experimentation, justification for the use of the transgenic animal involves weighing the possible benefits of the experiment (e.g., advances in biomedical knowledge, the understanding and treatment of disease, improvements in production of foodstuffs or pharmaceuticals) versus the consideration of the ethical cost of the experiment in terms of the potential suffering of the animal. This is particularly difficult for novel transgenic animals as it is not possible to predict with absolute certainty what the effect of a novel transgenic manipulation will be on the animals. For this reason, the protocol must include a strategy to address unanticipated suffering and to establish endpoints for the termination of the experiment. For these reasons, the guidelines require a transgenic information sheet to be completed with the protocol submission. In addition, a separate protocol is required for the creation of a novel transgenic animal, and for its subsequent use.
Category D level of invasiveness must be assigned to each creation protocol, until the effects on the progeny are known. Any harmful effects observed must be reported to the ACC.
The review of transgenic animal use protocols must take into consideration the effects of the transgenic modification on the animal itself, in addition to the subsequent effects incurred by the procedures.
As with any other laboratory animals, transgenic animals must be accorded high standards of care and use. Therefore, all standard operating procedures for laboratory management, human health (investigators and technicians involved with transgenic animals), and animal welfare (as part of the creation protocol, during the subsequent development of the progeny and as part of the animal use protocol) have to be evaluated accordingly.
LABORATORY ANIMAL MANAGEMENT
The implications of transgenic techniques for animal welfare have been discussed by Moore and Mepham (1995). Specific welfare aspects have to be taken into consideration with transgenic animals. They include: the extent of discomfort experienced by the parents during the experimental procedures; the effect of the expression of the transgene (the modified gene inserted) on the created transgenic animal; and the effects on their progeny.
Physical and biological containment for transgenic animals should be adequate to assure the biosafety of the animal care staff who work with the animals, to prevent any possibility of the transfer of the gene into the non-transgenic colonies maintained in the same facilities, and to protect potentially immuno-compromised transgenic animals from pathogens. The purpose of any breeding operation is to preserve the traits of interest and to restrict causes of genetic variability. Some problems associated with the breeding of transgenic mice may arise. For example, during the creation of transgenic mice, contamination of the media used in collecting eggs and blastocysts for microinjection can occur.
Establishing and maintaining lines requires careful management. As part of the general process of transgenic animal creation, each animal used must be carefully identified. Cage cards and good records with details of breeding information are necessary to be able to identify with certainty the genetic characteristics and modifications of the animal. The data recorded should include the identity, breeding, pedigree and any other pertinent data such as any dates, observations or laboratory analysis information. Since transgenic animals are not easily replaceable, the cost of containment is an important factor in transgenic experimental design.
Embryo freezing is used for the preservation of transgenic strains. To protect colonies against disease, contamination or any other cause of loss, a large number of preimplantation embryos are kept by cryopreservation. This also reduces the cost of maintaining a transgenic mouse line when it is not needed for experimentation.
With the development of transgenic animals, even a small genomic change can induce unpredictable and quite drastic changes at the level of the whole animal. This is the main challenge for transgenic animal management. It is therefore important to have a clear procedure for monitoring the animals and for dealing with unanticipated suffering.
THE NEED FOR ETHICAL REVIEW
The evaluation of animal and human welfare as it may be affected by biotechnology is a complex issue. One of the most notable difficulties in this process is the absence of an informed sense of the processes involved. ACCs share the responsibility for educating members on relevant aspects of animal care and use. Education concerning transgenic animal care and use is of particular importance, involving careful consideration of the reasons for manipulating the genome of any organism, since genetic engineering is among the most contentious of social issues.
The bioethical review process is complicated by the fact that many techniques and developments in biotechnology are eligible for patent protection. The reluctance of biotechnologists to reveal proprietary information is understandable. Current work in fish culture, genetically-altered animals for organ replacement, and cloning underlines the fact that transgenic research is wide ranging. To be able to provide a competent review, ACC members need to develop a similarly broad understanding of the underlying principles.
A thorough discussion of biotechnology issues, including transgenic animals is needed, particularly to develop some consensus as to the relative value of benefits to be obtained from the use of transgenic animals. One of the more challenging questions is how to account for the interests of the animals involved.
The field of transgenic animal biotechnology is likely to become of increasing importance as the techniques develop further and are applied to many more animal species. Welfare and ethical concerns will also continue to evolve. Consequently, education together with thoughtful ethical decision making will remain the keystone of the review of transgenic protocols.
The CCAC has made a commitment to review the guidelines on transgenic animals on a regular basis.
A - Basic:
1. The Cambridge Textbook of Bioethics / Edited by Peter A. Singer and A. M. Viens. – Cambridge University Press, Cambridge, UK, 2008. – 526 p.
2. Encyclopedia of Bioethics, Third Edition, in 5 vols. / Stephen G. Post, Editor in Chief. – Macmillan Reference USA, 2004. – 3300 p.
B - Additional:
1. Bioethics and Biosafety in Biotechnology / V. Sree Krishna. – New Age International (P) Ltd., Publishers, 2007. – 135 p.
2. Beauchamp T.L., Childress J.F. Principles of Biomedical Ethics. 5th ed. – New York: Oxford University Press, 2001.
3. Bioethics and Biosafety / M.K. Sateesh. – I K International Pvt Ltd, 2008. – 820 p.
4. Bioethics and Public Health Law / David Orentlicher, Mary Anne Bobinski, Mark A. Hall. – Aspen Publishers, 2008. – 687 p.
5. Professionalism in Health Care: A Primer for Career Success / Sherry Makely, Vanessa J. Austin, Quay Kester. – Pearson, 2012. – 238 p.
6. Moor J., Weckert J. (2003) Nanoethics: Assessing the Nanoscale from an Ethical Point of View. Lecture, Technical University of Darmstadt, October 10, 2003.
7. Mnyusiwalla A. et al. (2003) Mind the Gap: Science and Ethics in Nanotechnology. Nanotechnology 14: R9-R13; www.iop.org/EJ.
8. Altmann J., Gubrud A.A. (2002) Risks from Military Uses of Nanotechnology – the Need for Technology Assessment and Preventive Control. In: Rocco M., Tomellini R. (eds.) Nanotechnology – Revolutionary Opportunities and Societal Implications. European Commission, Luxembourg.