The National State Tax Service University of Ukraine



Texts for the Working Curriculum «Foreign Language» (English) for the training of bachelors in the field of knowledge 0501 «Informatics and Computing», training direction 6.050101 «Computer Science»

(Electronic resource)

 

                                              

                                                 

Science

 

This article is about the general term "Science", particularly as it refers to the experimental sciences. For the specific topics of study of scientists, see natural science.

Science (from the Latin scientia, meaning "knowledge") is, in its broadest sense, any systematic knowledge that is capable of resulting in a correct prediction or reliable outcome. In this sense, science may refer to a highly skilled technique, technology, or practice.

In today's more restricted sense, science refers to a system of acquiring knowledge based on scientific method, and to the organized body of knowledge gained through such research. It is a "systematic enterprise of gathering knowledge about the world and organizing and condensing that knowledge into testable laws and theories". This article focuses upon science in this more restricted sense, sometimes called experimental science, and also gives some broader historical context leading up to the modern understanding of the word "science."

From the Middle Ages to the Enlightenment, "science" had more-or-less the same sort of very broad meaning in English that "philosophy" had at that time. By the early 1800s, "natural philosophy" (which eventually evolved into what is today called "natural science") had begun to separate from "philosophy" in general. In many cases, "science" continued to stand for reliable knowledge about any topic, in the same way it is still used in the broad sense in modern terms such as library science, political science, and computer science. In the more narrow sense of "science" today, as natural philosophy became linked to an expanding set of well-defined laws (beginning with Galileo's laws, Kepler's laws, and Newton's laws for motion), it became more common to refer to natural philosophy as "natural science". Over the course of the 1800s, the word "science" became increasingly associated mainly with the disciplined study of the natural world (that is, the non-human world). This sometimes left the study of human thought and society in a linguistic limbo, which has today been resolved by classifying these areas of study as the social sciences.

Basic classifications

Scientific fields are commonly divided into two major groups: natural sciences, which study natural phenomena (including biological life), and social sciences, which study human behavior and societies. These groupings are empirical sciences, which means the knowledge must be based on observable phenomena and capable of being tested for its validity by other researchers working under the same conditions. There are also related disciplines that are grouped into interdisciplinary and applied sciences, such as engineering and health science. Within these categories are specialized scientific fields that can include elements of other scientific disciplines but often possess their own terminology and body of expertise.

Mathematics, which is classified as a formal science, has both similarities and differences with the natural and social sciences. It is similar to empirical sciences in that it involves an objective, careful and systematic study of an area of knowledge; it is different because of its method of verifying its knowledge, using a priori rather than empirical methods. Formal science, which also includes statistics and logic, is vital to the empirical sciences. Major advances in formal science have often led to major advances in the empirical sciences. The formal sciences are essential in the formation of hypotheses, theories, and laws, both in discovering and describing how things work (natural sciences) and how people think and act (social sciences).

Applied science (e.g., engineering) is the practical application of scientific knowledge.

History and etymology

It is widely accepted that 'modern science' arose in the Europe of the 17th century (towards the end of the Renaissance), introducing a new understanding of the natural world. While empirical investigations of the natural world have been described since antiquity (for example, by Aristotle and Pliny the Elder), and scientific methods have been employed since the Middle Ages (for example, by Alhazen and Roger Bacon), the dawn of modern science is generally traced back to the early modern period during what is known as the Scientific Revolution of the 16th and 17th centuries.

The word "science" comes through the Old French, and is derived in turn from the Latin scientia, "knowledge", the nominal form of the verb scire, "to know". The Proto-Indo-European (PIE) root that yields scire is *skei-, meaning to "cut, separate, or discern". Similarly, the Greek word for science is 'επιστήμη', deriving from the verb 'επίσταμαι', 'to know'. From the Middle Ages to the Enlightenment, science or scientia meant any systematic recorded knowledge. Science therefore had the same sort of very broad meaning that philosophy had at that time. In other languages, including French, Spanish, Portuguese, and Italian, the word corresponding to science also carries this meaning.

Prior to the 1700s, the preferred term for the study of nature among English speakers was "natural philosophy", while other philosophical disciplines (e.g., logic, metaphysics, epistemology, ethics and aesthetics) were typically referred to as "moral philosophy". Today, "moral philosophy" is more-or-less synonymous with "ethics". Well into the 1700s, science and natural philosophy were not quite synonymous, but only became so later with the direct use of what would become known formally as the scientific method. By contrast, the word "science" in English was still used in the 17th century (1600s) to refer to the Aristotelian concept of knowledge which was secure enough to be used as a prescription for exactly how to accomplish a specific task. With respect to the transitional usage of the term "natural philosophy" in this period, the philosopher John Locke wrote disparagingly in 1690 that "natural philosophy is not capable of being made a science".

Locke's assertion notwithstanding, by the early 1800s natural philosophy had begun to separate from philosophy, though it often retained a very broad meaning. In many cases, science continued to stand for reliable knowledge about any topic, in the same way it is still used today in the broad sense (see the introduction to this article) in modern terms such as library science, political science, and computer science. In the more narrow sense of science, as natural philosophy became linked to an expanding set of well-defined laws (beginning with Galileo's laws, Kepler's laws, and Newton's laws for motion), it became more popular to refer to natural philosophy as natural science. Over the course of the nineteenth century, moreover, there was an increased tendency to associate science with study of the natural world (that is, the non-human world). This move sometimes left the study of human thought and society (what would come to be called social science) in a linguistic limbo by the end of the century and into the next.

Through the 1800s, many English speakers were increasingly differentiating science (i.e., the natural sciences) from all other forms of knowledge in a variety of ways. The now-familiar expression "scientific method," which refers to the prescriptive part of how to make discoveries in natural philosophy, was almost unused until then, but became widespread after the 1870s, though there was rarely total agreement about just what it entailed. The word "scientist," meant to refer to a systematically working natural philosopher (as opposed to an intuitive or empirically minded one), was coined in 1833 by William Whewell. Discussion of scientists as a special group of people who did science, even if their attributes were up for debate, grew in the last half of the 19th century. Whatever people actually meant by these terms at first, they ultimately depicted science, in the narrow sense of the habitual use of the scientific method and the knowledge derived from it, as something deeply distinguished from all other realms of human endeavor.

By the twentieth century (1900s), the modern notion of science as a special kind of knowledge about the world, practiced by a distinct group and pursued through a unique method, was essentially in place. It was used to give legitimacy to a variety of fields through such titles as "scientific" medicine, engineering, advertising, or motherhood. Over the 1900s, links between science and technology also grew increasingly strong. As Martin Rees explains, progress in scientific understanding and technology have been synergistic and vital to one another.

Richard Feynman described science in the following way for his students: "The principle of science, the definition, almost, is the following: The test of all knowledge is experiment. Experiment is the sole judge of scientific 'truth'. But what is the source of knowledge? Where do the laws that are to be tested come from? Experiment, itself, helps to produce these laws, in the sense that it gives us hints. But also needed is imagination to create from these hints the great generalizations — to guess at the wonderful, simple, but very strange patterns beneath them all, and then to experiment to check again whether we have made the right guess." Feynman also observed, "...there is an expanding frontier of ignorance...things must be learned only to be unlearned again or, more likely, to be corrected."

Scientific method

A scientific method seeks to explain the events of nature in a reproducible way, and to use these findings to make useful predictions. This is done partly through observation of natural phenomena, but also through experimentation that tries to simulate natural events under controlled conditions. Taken in its entirety, the scientific method allows for highly creative problem solving whilst minimizing any effects of subjective bias on the part of its users (namely the confirmation bias).

Basic and applied research

Although some scientific research is applied research into specific problems, a great deal of our understanding comes from the curiosity-driven undertaking of basic research. This leads to options for technological advance that were not planned or sometimes even imaginable. This point was made by Michael Faraday when, allegedly in response to the question "what is the use of basic research?" he responded "Sir, what is the use of a new-born child?". For example, research into the effects of red light on the human eye's rod cells did not seem to have any practical purpose; eventually, the discovery that our night vision is not troubled by red light would lead militaries to adopt red light in the cockpits of all jet fighters.

Experimentation and hypothesizing

DNA determines the genetic structure of all known life

The Bohr model of the atom, like many ideas in the history of science, was at first prompted by (and later partially disproved by) experimentation.

Based on observations of a phenomenon, scientists may generate a model. This is an attempt to describe or depict the phenomenon in terms of a logical physical or mathematical representation. As empirical evidence is gathered, scientists can suggest a hypothesis to explain the phenomenon. Hypotheses may be formulated using principles such as parsimony (traditionally known as "Occam's Razor") and are generally expected to seek consilience, fitting well with other accepted facts related to the phenomena. This new explanation is used to make falsifiable predictions that are testable by experiment or observation. When a hypothesis proves unsatisfactory, it is either modified or discarded. Experimentation is especially important in science to help establish causal relationships (to avoid the correlation fallacy). Operationalization also plays an important role in coordinating research in and across different fields.

Once a hypothesis has survived testing, it may become adopted into the framework of a scientific theory. This is a logically reasoned, self-consistent model or framework for describing the behavior of certain natural phenomena. A theory typically describes the behavior of much broader sets of phenomena than a hypothesis; commonly, a large number of hypotheses can be logically bound together by a single theory. Thus a theory is a hypothesis explaining various other hypotheses. In that vein, theories are formulated according to most of the same scientific principles as hypotheses.

While performing experiments, scientists may have a preference for one outcome over another, and so it is important to ensure that science as a whole can eliminate this bias. This can be achieved by careful experimental design, transparency, and a thorough peer review process of the experimental results as well as any conclusions. After the results of an experiment are announced or published, it is normal practice for independent researchers to double-check how the research was performed, and to follow up by performing similar experiments to determine how dependable the results might be.

Certainty and science

Unlike a mathematical proof, a scientific theory is empirical, and is always open to falsification if new evidence is presented. That is, no theory is ever considered strictly certain, as science works under a fallibilistic view. Instead, science aims to make predictions with great probability, bearing in mind that the most likely event is not always what actually happens. During the Yom Kippur War, cognitive psychologist Daniel Kahneman was asked to explain why one squad of aircraft had returned safely, yet a second squad on the exact same operation had lost all of its planes. Rather than conduct a study in the hope of a new hypothesis, Kahneman simply reiterated the importance of expecting some coincidences in life, explaining that absurdly rare things, by definition, occasionally happen.

Though the scientist believing in evolution admits uncertainty, she is probably correct

Theories very rarely result in vast changes in our understanding. According to psychologist Keith Stanovich, it may be the media's overuse of words like "breakthrough" that leads the public to imagine that science is constantly proving everything it thought was true to be false. While there are such famous cases as the theory of relativity that required a complete reconceptualization, these are extreme exceptions. Knowledge in science is gained by a gradual synthesis of information from different experiments, by various researchers, across different domains of science; it is more like a climb than a leap. Theories vary in the extent to which they have been tested and verified, as well as their acceptance in the scientific community. For example, heliocentric theory, the theory of evolution, and germ theory still bear the name "theory" even though, in practice, they are considered factual.

Philosopher Barry Stroud adds that, although the best definition for "knowledge" is contested, being skeptical and entertaining the possibility that one is incorrect is compatible with being correct. Ironically then, the scientist adhering to proper scientific method will doubt themselves even once they possess the truth.

Stanovich also asserts that science avoids searching for a "magic bullet"; it avoids the single cause fallacy. This means a scientist would not ask merely "What is the cause of...", but rather "What are the most significant causes of...". This is especially the case in the more macroscopic fields of science (e.g. psychology, cosmology). Of course, research often analyzes only a few factors at once, but these are always added to the long list of factors that are most important to consider. For example: knowing the details of only a person's genetics, or their history and upbringing, or the current situation may not explain a behaviour, but a deep understanding of all these variables combined can be very predictive.

Mathematics

Data from the famous Michelson–Morley experiment

Mathematics is essential to the sciences. One important function of mathematics in science is the role it plays in the expression of scientific models. Observing and collecting measurements, as well as hypothesizing and predicting, often require extensive use of mathematics. Arithmetic, algebra, geometry, trigonometry and calculus, for example, are all essential to physics. Virtually every branch of mathematics has applications in science, including "pure" areas such as number theory and topology.

Statistical methods, which are mathematical techniques for summarizing and analyzing data, allow scientists to assess the level of reliability and the range of variation in experimental results. Statistical analysis plays a fundamental role in many areas of both the natural sciences and social sciences.
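As a rough illustration of the point above, the short Python sketch below summarizes a set of repeated measurements with a mean, a standard deviation, and a standard error. The sample values are invented purely for the example; this is not a method prescribed by the text.

    # A minimal, illustrative sketch: summarizing repeated measurements.
    # The data values below are made up for demonstration only.
    import statistics

    measurements = [9.8, 10.1, 9.9, 10.3, 10.0, 9.7]   # repeated readings of one quantity

    mean = statistics.mean(measurements)
    stdev = statistics.stdev(measurements)              # sample standard deviation (spread)
    stderr = stdev / len(measurements) ** 0.5           # standard error of the mean (reliability)

    print(f"mean = {mean:.2f}, spread = {stdev:.2f}, standard error = {stderr:.2f}")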

Computational science applies computing power to simulate real-world situations, enabling a better understanding of scientific problems than formal mathematics alone can achieve. According to the Society for Industrial and Applied Mathematics, computation is now as important as theory and experiment in advancing scientific knowledge.

Whether mathematics itself is properly classified as science has been a matter of some debate. Some thinkers see mathematicians as scientists, regarding physical experiments as inessential or mathematical proofs as equivalent to experiments. Others do not see mathematics as a science, since it does not require an experimental test of its theories and hypotheses. Mathematical theorems and formulas are obtained by logical derivations which presume axiomatic systems, rather than the combination of empirical observation and logical reasoning that has come to be known as scientific method. In general, mathematics is classified as formal science, while natural and social sciences are classified as empirical sciences.

Scientific community

The Meissner effect causes a magnet to levitate above a superconductor

The scientific community consists of the total body of scientists, together with their relationships and interactions. It is normally divided into "sub-communities", each working on a particular field within science.

Fields

Fields of science are widely recognized categories of specialized expertise, and typically embody their own terminology and nomenclature. Each field is commonly represented by one or more scientific journals, where peer-reviewed research is published.

Institutions

Louis XIV visiting the Académie des sciences in 1671

Learned societies for the communication and promotion of scientific thought and experimentation have existed since the Renaissance period. The oldest surviving institution is the Accademia dei Lincei in Italy. National academies of science are distinguished institutions that exist in a number of countries, beginning with the British Royal Society in 1660 and the French Académie des Sciences in 1666.

International scientific organizations, such as the International Council for Science, have since been formed to promote cooperation between the scientific communities of different nations. More recently, influential government agencies have been created to support scientific research, including the National Science Foundation in the U.S.

Other prominent organizations include the National Scientific and Technical Research Council in Argentina, the academies of science of many nations, CSIRO in Australia, Centre national de la recherche scientifique in France, Max Planck Society and Deutsche Forschungsgemeinschaft in Germany, and in Spain, CSIC.

Literature

An enormous range of scientific literature is published. Scientific journals communicate and document the results of research carried out in universities and various other research institutions, serving as an archival record of science. The first scientific journals, the Journal des Sçavans followed by the Philosophical Transactions, began publication in 1665. Since that time the total number of active periodicals has steadily increased. As of 1981, one estimate for the number of scientific and technical journals in publication was 11,500. Today PubMed lists almost 40,000 journals related to the medical sciences alone.

Most scientific journals cover a single scientific field and publish the research within that field; the research is normally expressed in the form of a scientific paper. Science has become so pervasive in modern societies that it is generally considered necessary to communicate the achievements, news, and ambitions of scientists to a wider populace.

Science magazines such as New Scientist, Science & Vie and Scientific American cater to the needs of a much wider readership and provide a non-technical summary of popular areas of research, including notable discoveries and advances in certain fields of research. Science books engage the interest of many more people. Tangentially, the science fiction genre, primarily fantastic in nature, engages the public imagination and transmits the ideas, if not the methods, of science.

Recent efforts to intensify or develop links between science and non-scientific disciplines such as Literature or, more specifically, Poetry, include the Creative Writing Science resource developed through the Royal Literary Fund.

Philosophy of science

Velocity-distribution data of a gas of rubidium atoms, confirming the discovery of a new phase of matter, the Bose–Einstein condensate

The philosophy of science seeks to understand the nature and justification of scientific knowledge. It has proven difficult to provide a definitive account of scientific method that can decisively serve to distinguish science from non-science. Thus there are legitimate arguments about exactly where the borders are, which is known as the problem of demarcation. There is nonetheless a set of core precepts that have broad consensus among published philosophers of science and within the scientific community at large. For example, it is universally agreed that scientific hypotheses and theories must be capable of being independently tested and verified by other scientists in order to become accepted by the scientific community.

There are different schools of thought in the philosophy of scientific method. Methodological naturalism maintains that scientific investigation must adhere to empirical study and independent verification as a process for properly developing and evaluating natural explanations for observable phenomena. Methodological naturalism, therefore, rejects supernatural explanations, arguments from authority and biased observational studies. Critical rationalism instead holds that unbiased observation is not possible and a demarcation between natural and supernatural explanations is arbitrary; it instead proposes falsifiability as the landmark of empirical theories and falsification as the universal empirical method. Critical rationalism argues for the ability of science to increase the scope of testable knowledge, but at the same time against its authority, by emphasizing its inherent fallibility. It proposes that science should be content with the rational elimination of errors in its theories, not in seeking for their verification (such as claiming certain or probable proof or disproof; both the proposal and falsification of a theory are only of methodological, conjectural, and tentative character in critical rationalism). Instrumentalism rejects the concept of truth and emphasizes merely the utility of theories as instruments for explaining and predicting phenomena.

Biologist Stephen J. Gould maintained that certain philosophical propositions—i.e., 1) uniformity of law and 2) uniformity of processes across time and space—must first be assumed before one can proceed as a scientist doing science. Gould summarized this view as follows: "You cannot go to a rocky outcrop and observe either the constancy of nature's laws nor the working of unknown processes. It works the other way around." You first assume these propositions and "then you go to the outcrop of rock."

Pseudoscience, fringe science, and junk science

An area of study or speculation that masquerades as science in an attempt to claim a legitimacy that it would not otherwise be able to achieve is sometimes referred to as pseudoscience, fringe science, or "alternative science". Another term, junk science, is often used to describe scientific hypotheses or conclusions which, while perhaps legitimate in themselves, are believed to be used to support a position that is seen as not legitimately justified by the totality of evidence. A variety of commercial advertising, ranging from hype to fraud, may fall into this category.

There also can be an element of political or ideological bias on all sides of such debates. Sometimes, research may be characterized as "bad science", research that is well-intentioned but is seen as incorrect, obsolete, incomplete, or over-simplified expositions of scientific ideas. The term "scientific misconduct" refers to situations such as where researchers have intentionally misrepresented their published data or have purposely given credit for a discovery to the wrong person.

Philosophical critiques

Historian Jacques Barzun termed science "a faith as fanatical as any in history" and warned against the use of scientific thought to suppress considerations of meaning as integral to human existence. Many recent thinkers, such as Carolyn Merchant, Theodor Adorno and E. F. Schumacher considered that the 17th century scientific revolution shifted science from a focus on understanding nature, or wisdom, to a focus on manipulating nature, i.e. power, and that science's emphasis on manipulating nature leads it inevitably to manipulate people, as well. Science's focus on quantitative measures has led to critiques that it is unable to recognize important qualitative aspects of the world.

Philosopher of science Paul K Feyerabend advanced the idea of epistemological anarchism, which holds that there are no useful and exception-free methodological rules governing the progress of science or the growth of knowledge, and that the idea that science can or should operate according to universal and fixed rules is unrealistic, pernicious and detrimental to science itself. Feyerabend advocates treating science as an ideology alongside others such as religion, magic and mythology, and considers the dominance of science in society authoritarian and unjustified. He also contended (along with Imre Lakatos) that the demarcation problem of distinguishing science from pseudoscience on objective grounds is not possible and thus fatal to the notion of science running according to fixed, universal rules.

Feyerabend also criticized science for not having evidence for its own philosophical precepts, particularly the notions of the uniformity of law and the uniformity of process across time and space. "We have to realize that a unified theory of the physical world simply does not exist," says Feyerabend. "We have theories that work in restricted regions, we have purely formal attempts to condense them into a single formula, we have lots of unfounded claims (such as the claim that all of chemistry can be reduced to physics), phenomena that do not fit into the accepted framework are suppressed; in physics, which many scientists regard as the one really basic science, we have now at least three different points of view...without a promise of conceptual (and not only formal) unification".

Professor Stanley Aronowitz scrutinizes science for operating with the presumption that the only acceptable criticisms of science are those conducted within the methodological framework that science has set up for itself, and for insisting that only those who have been inducted into its community, through training and credentials, are qualified to make these criticisms. Aronowitz also alleges that while scientists consider it absurd that fundamentalist Christians use biblical references to bolster their claim that the Bible is true, scientists employ the same tactic by using the tools of science to settle disputes concerning its own validity.

Psychologist Carl Jung believed that though science attempted to understand all of nature, the experimental method imposed artificial and conditional questions that evoke equally artificial answers. Jung encouraged, instead of these 'artificial' methods, empirically testing the world in a holistic manner. David Parkin compared the epistemological stance of science to that of divination. He suggested that, to the degree that divination is an epistemologically specific means of gaining insight into a given question, science itself can be considered a form of divination that is framed from a Western view of the nature (and thus possible applications) of knowledge.

Several academics have offered critiques concerning ethics in science. In Science and Ethics, for example, the philosopher Bernard Rollin examines the relevance of ethics to science, and argues in favor of making education in ethics part and parcel of scientific training.

Media perspectives

The mass media face a number of pressures that can prevent them from accurately depicting competing scientific claims in terms of their credibility within the scientific community as a whole. Determining how much weight to give different sides in a scientific debate requires considerable expertise regarding the matter. Few journalists have real scientific knowledge, and even beat reporters who know a great deal about certain scientific issues may know little about other ones they are suddenly asked to cover.

http://en.wikipedia.org/wiki/Science

 

The National State Tax Service University of Ukraine

The National State Tax Service University of Ukraine is located on 23 hectares of parkland in the picturesque town of Irpin, 25 km from the capital city of Kyiv.

Having passed through the stages of technical school, college, institute, academy, national academy, university, and national university, our educational establishment, with its rich 80-year history, has become a unique higher educational institution of the early third millennium.

The University is the base educational institution of the State Tax Service of Ukraine; it has branches in the cities of Zhytomyr, Vinnitsa, Simferopol, Storozhynets (Chernivtsi region), and Kamenets-Podilsky (Khmelnitsky region).

Our University trains bachelors, specialists, and masters. More than 10 thousand young men and women from all regions of Ukraine study full-time or by correspondence at the faculties of tax militia, law, accounting and economics, finance, taxation, correspondence study, and military training.

The invaluable capital of the University is its teaching staff: among 540 highly skilled teachers there are 5 academicians, 55 doctors and 154 candidates of sciences, and over 191 professors and docents.

Within the structure of the University, the Kyiv Financial and Economic College operates to train junior specialists.

Among 500 highly skilled teachers there are 6 academicians, 60 doctors and more than 200 candidates of sciences, 55 professors and 137 associate professors.

Thanks to its modern teaching, methodological and material base, the educational process at the University is carried out at the level of modern educational requirements. During its existence the institution has trained almost 60 thousand highly skilled specialists.

The Academic Council, which comprises 55 experienced researchers and teachers, coordinates the research work. The University offers postgraduate study and has specialized councils for training candidates of sciences in economics and jurisprudence.

Scientific conferences, including international ones, are held on a regular basis. The Irpin international pedagogical readings have become a traditional event. Members of the students' and cadets' scientific society have repeatedly become winners of All-Ukrainian and international conferences.

Teachers of the University take part in the pedagogical experiment on introducing the credit-modular system of organizing the educational process. The institution has a Coordination Council whose functions include supporting and summarizing the results of implementing the provisions of the Bologna Declaration.

An important role in the educational process is played by the library, whose book fund totals over 170 thousand copies.

The Information and Publishing Centre issues the "Tax academy" newspaper and the "Naukovy visnyk" collection of scientific works, produces video materials about the life of the institution, maintains the University website, and publishes books and manuals.

The activities of the student administration cover training, students' private life, leisure, participation in public life, scientific research, and amateur creative work.

"Suzirja" Cultural-art center of the National STS University of Ukraine consists of 20 many genres creative collectives and studios, it is a holiday of music, poetry and dance. The creative collective is known not only in Ukraine, but in Poland, Cyprus, Germany, Spain, the USA, and Canada as well.

Athletes who study and train at the University are known all over the world. Among its students there are 20 masters of sport of international class who have been participants and prize-winners of the Olympic Games, university games, and world, European and Ukrainian championships.

The University provides all the conditions not only for study, but also for living and recreation. Apartment houses for the faculty, an Information Centre, and a students' hostel have been constructed. The training facilities of the tax militia and military training faculties are being extended.

The institution has its own medical unit consisting of a polyclinic, a day hospital, and a clinical laboratory.

Life at the National STS University is vibrant. You are welcome to visit us, and you will feel yourselves participants in these dynamic and interesting events.

Postal address: National STS University of Ukraine, 31 K. Marx Str., Irpin, Kyiv region, Ukraine, 08201

 

Phone: (+3804497) 57571 Fax: (+3804497) 60294 

E-mail: admin@asta.edu.ua 

 

         

 

Infotech

 

 

The Shamanistic Tradition

 


 

The start of the modern science that we call "Computer Science" can be traced back to an age long ago when man still dwelled in caves or in the forest, and lived in groups for protection and survival from the harsher elements on the Earth. Many of these groups possessed some primitive form of animistic religion; they worshipped the sun, the moon, the trees, or sacred animals. Within the tribal group was one individual to whom fell the responsibility for the tribe's spiritual welfare. It was he or she who decided when to hold both the secret and public religious ceremonies, and interceded with the spirits on behalf of the tribe. In order to correctly hold the ceremonies to ensure good harvest in the fall and fertility in the spring, the shamans needed to be able to count the days or to track the seasons. From the shamanistic tradition, man developed the first primitive counting mechanisms -- counting notches on sticks or marks on walls.

   

 

 

A Primitive Calendar

 


 

From the caves and the forests, man slowly evolved and built structures such as Stonehenge. Stonehenge, which lies 13 km north of Salisbury, England, is believed to have been an ancient form of calendar designed to capture the light from the summer solstice in a specific fashion. The solstices have long been special days for various religious groups and cults. Archeologists and anthropologists today are not quite certain how the structure, believed to have been built about 2800 B.C., came to be erected, since the technology required to join together the giant stones and raise them upright seems to be beyond the technological level of the Britons at the time. It was long popularly believed that the enormous edifice of stone was erected by the Druids, although the monument predates them by many centuries. Regardless of the identity of the builders, it remains today a monument to man's intense desire to count and to track the occurrences of the physical world around him.

 

 

A Primitive Calculator

 


 

Meanwhile in Asia, the Chinese were becoming very involved in commerce with the Japanese, Indians, and Koreans. Businessmen needed a way to tally accounts and bills. Somehow, out of this need, the abacus was born. The abacus is the first true precursor to the adding machines and computers which would follow. It worked somewhat like this:

The value assigned to each pebble (or bead, shell, or stick) is determined not by its shape but by its position: one pebble on a particular line or one bead on a particular wire has the value of 1; two together have the value of 2. A pebble on the next line, however, might have the value of 10, and a pebble on the third line would have the value of 100. Therefore, three properly placed pebbles--two with values of 1 and one with the value of 10--could signify 12, and the addition of a fourth pebble with the value of 100 could signify 112, using a place-value notational system with multiples of 10.

Thus, the abacus works on the principle of place-value notation: the location of the bead determines its value. In this way, relatively few beads are required to depict large numbers. The beads are counted, or given numerical values, by shifting them in one direction. The values are erased (freeing the counters for reuse) by shifting the beads in the other direction. An abacus is really a memory aid for the user making mental calculations, as opposed to the true mechanical calculating machines which were still to come.
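The place-value principle described above can be sketched in a few lines of Python. The function name and the bead-count layout below are illustrative choices for this sketch, not part of the original description.

    # Illustrative sketch of place-value notation on an abacus.
    # Each rod holds a count of beads; the rightmost rod is the ones place,
    # the next rod the tens place, the next the hundreds, and so on.
    def abacus_value(rods):
        """rods: bead counts from the ones rod upward, e.g. [2, 1, 1] -> 2 + 10 + 100."""
        total = 0
        for power, beads in enumerate(rods):
            total += beads * (10 ** power)
        return total

    print(abacus_value([2, 1]))     # two ones and one ten  -> 12
    print(abacus_value([2, 1, 1]))  # add one hundred       -> 112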

 

Forefathers of Computing

For over a thousand years after the Chinese invented the abacus, not much progress was made in automating counting and mathematics. The Greeks came up with numerous mathematical formulae and theorems, but all of the newly discovered mathematics had to be worked out by hand. A mathematician was often a person who sat in the back room of an establishment with several others, all working on the same problem; the redundant personnel working on the same problem were there to ensure the correctness of the answer. It could take weeks or months of laborious work by hand to verify the correctness of a proposed theorem. Most of the tables of integrals, logarithms, and trigonometric values were worked out this way, their accuracy unchecked until machines could generate the tables in far less time and with more accuracy than a team of humans could ever hope to achieve.

 

The First Mechanical Calculator

Blaise Pascal, noted mathematician, thinker, and scientist, built the first mechanical adding machine in 1642, based on a design described by Hero of Alexandria (2 AD) for adding up the distance a carriage travelled. The basic principle of his calculator is still used today in water meters and modern-day odometers. Instead of having a carriage wheel turn the gear, he made each ten-toothed wheel accessible to be turned directly by a person's hand (later inventors added keys and a crank), with the result that when the wheels were turned in the proper sequences, a series of numbers was entered and a cumulative sum was obtained. The gear train supplied a mechanical answer equal to the answer that is obtained by using arithmetic.
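The carry principle behind those ten-toothed wheels can be approximated in code. The sketch below is only a rough model of the idea, with an invented function name and digit layout: each wheel holds a digit, and when it passes 9 it rolls over and advances its neighbour, as an odometer does.

    # Illustrative sketch of odometer-style carrying (names are hypothetical).
    # wheels: list of digits 0-9, least significant wheel first.
    def pascaline_add(wheels, amount, position=0):
        """Add 'amount' to the wheel at 'position', propagating carries leftward."""
        wheels = wheels[:]
        wheels[position] += amount
        i = position
        while i < len(wheels) and wheels[i] > 9:
            carry, wheels[i] = divmod(wheels[i], 10)
            if i + 1 < len(wheels):            # a carry past the last wheel is lost,
                wheels[i + 1] += carry         # just as an odometer rolls over
            i += 1
        return wheels

    print(pascaline_add([8, 9, 0], 5))  # 98 + 5 -> [3, 0, 1], i.e. 103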

This first mechanical calculator, called the Pascaline, had several disadvantages. Although it did offer a substantial improvement over manual calculations, only Pascal himself could repair the device and it cost more than the people it replaced! In addition, the first signs of technophobia emerged with mathematicians fearing the loss of their jobs due to progress.  

 

The Difference Engine

 


 

While Thomas of Colmar was developing the first successful commercial calculator, Charles Babbage realized as early as 1812 that many long computations consisted of operations that were regularly repeated. He theorized that it must be possible to design a calculating machine which could do these operations automatically. He produced a prototype of this "difference engine" by 1822 and, with the help of the British government, started work on the full machine in 1823. It was intended to be steam-powered; fully automatic, even to the printing of the resulting tables; and commanded by a fixed instruction program.
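The regularly repeated operations Babbage had in mind rested on the method of differences: for a polynomial of degree n, the n-th differences are constant, so every further table entry can be produced by additions alone. A minimal Python sketch of the idea follows; the example polynomial is arbitrary and chosen only for illustration, and the helper names are invented.

    # Sketch of the method of repeated differences that the Difference Engine mechanised.
    def difference_table(values, order):
        """Seed differences from the first few values of a degree-'order' polynomial."""
        rows = [values[:]]
        for _ in range(order):
            prev = rows[-1]
            rows.append([b - a for a, b in zip(prev, prev[1:])])
        return [row[0] for row in rows]

    def extend(seeds, count):
        """Generate further table entries using additions only."""
        seeds = seeds[:]
        out = []
        for _ in range(count):
            out.append(seeds[0])
            for i in range(len(seeds) - 1):
                seeds[i] += seeds[i + 1]
        return out

    f = lambda x: x * x + x + 41              # example polynomial of degree 2
    seeds = difference_table([f(0), f(1), f(2)], 2)
    print(extend(seeds, 6))                   # [41, 43, 47, 53, 61, 71] -- matches f(0)..f(5)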

 

 

The Conditional

 


In 1833, Babbage ceased working on the difference engine because he had a better idea. His new idea was to build an "analytical engine." The analytical engine was a real parallel decimal computer which would operate on words of 50 decimals and was able to store 1000 such numbers. The machine would include a number of built-in operations such as conditional control, which allowed the instructions for the machine to be executed in a specific order rather than in numerical order. The instructions for the machine were to be stored on punched cards, similar to those used on a Jacquard loom.
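The idea of conditional control, executing instructions out of plain numerical order, can be illustrated with a toy instruction list in Python. The opcodes below are invented for this sketch and are not Babbage's notation.

    # Toy illustration of conditional control: a jump lets execution depart
    # from straight numerical order (opcode names are invented).
    program = [
        ("SET", "x", 5),
        ("DEC", "x"),
        ("JUMP_IF_NONZERO", "x", 1),   # go back to step 1 while x is not zero
        ("PRINT", "x"),
    ]

    def run(program):
        vars, pc = {}, 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "SET":
                vars[args[0]] = args[1]
            elif op == "DEC":
                vars[args[0]] -= 1
            elif op == "JUMP_IF_NONZERO" and vars[args[0]] != 0:
                pc = args[1]
                continue
            elif op == "PRINT":
                print(args[0], "=", vars[args[0]])
            pc += 1

    run(program)   # prints: x = 0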

 

 

 

Herman Hollerith

 


 

A step toward automated computation was the introduction of punched cards, which were first successfully used in connection with computing in 1890 by Herman Hollerith, working for the U.S. Census Bureau. He developed a device which could automatically read census information which had been punched onto cards. Surprisingly, he did not get the idea from the work of Babbage, but rather from watching a train conductor punch tickets. As a result of his invention, reading errors were greatly reduced, work flow was increased, and, more important, stacks of punched cards could be used as an accessible memory store of almost unlimited capacity; furthermore, different problems could be stored on different batches of cards and worked on as needed. Hollerith's tabulator became so successful that he started his own firm to market the device; this company eventually became International Business Machines (IBM).

 

 

 

Binary Representation

 


 

Hollerith's machine, though, had limitations. It was strictly limited to tabulation. The punched cards could not be used to direct more complex computations. In 1941, Konrad Zuse (*), a German who had developed a number of calculating machines, released the first programmable computer designed to solve complex engineering equations. The machine, called the Z3, was controlled by perforated strips of discarded movie film. As well as being controllable by these celluloid strips, it was also the first machine to work on the binary system, as opposed to the more familiar decimal system.

The binary system is composed of 0s and 1s. A punch card, with its two states--a hole or no hole--was admirably suited to representing things in binary. If a hole was read by the card reader, it was considered to be a 1. If no hole was present in a column, a zero was appended to the current number. The total number of possible values can be calculated by raising 2 to the power of the number of bits in the binary number. A bit is simply a single digit of a binary number--a 0 or a 1. Thus, if you had a binary number of 6 bits, 64 different numbers could be generated (2^6 = 64; in general, 2^n).
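The counting argument above can be checked with a short Python sketch. The example punched-card column is invented for illustration, with a hole read as 1 and no hole as 0.

    # n two-state positions (bits) can represent 2**n distinct values.
    for n in range(1, 9):
        print(n, "bits ->", 2 ** n, "distinct values")

    # Reading a punched-card column as binary, hole = 1, no hole = 0 (illustrative encoding):
    column = [1, 0, 1, 1, 0, 1]                      # 6 positions
    value = int("".join(str(bit) for bit in column), 2)
    print(column, "encodes", value)                  # 101101 in binary is 45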

Binary representation was going to prove important in the future design of computers, which took advantage of a multitude of two-state devices such as card readers, electric circuits which could be on or off, and vacuum tubes.

* Zuse died in December 1995.

 

 

Harvard Mark I

 


 

By the late 1930s punched-card machine techniques had become so well established and reliable that Howard Aiken, in collaboration with engineers at IBM, undertook construction of a large automatic digital computer based on standard IBM electromechanical parts. Aiken's machine, called the Harvard Mark I, handled 23-decimal-place numbers (words) and could perform all four arithmetic operations; moreover, it had special built-in programs, or subroutines, to handle logarithms and trigonometric functions. The Mark I was originally controlled from pre-punched paper tape without provision for reversal, so that automatic "transfer of control" instructions could not be programmed. Output was by card punch and electric typewriter. Although the Mark I used IBM rotating counter wheels as key components in addition to electromagnetic relays, the machine was classified as a relay computer. It was slow, requiring 3 to 5 seconds for a multiplication, but it was fully automatic and could complete long computations without human intervention. The Harvard Mark I was the first of a series of computers designed and built under Aiken's direction.

 

Alan Turing

 


 

Meanwhile, over in Great Britain, the British mathematician Alan Turing wrote a paper in 1936 entitled On Computable Numbers in which he described a hypothetical device, a Turing machine, that presaged programmable computers. The Turing machine was designed to perform logical operations and could read, write, or erase symbols written on squares of an infinite paper tape. This kind of machine came to be known as a finite state machine because at each step in a computation, the machine's next action was matched against a finite instruction list of possible states.

The Turing machine, pictured here above the paper tape, reads in the symbols from the tape one at a time. What we would like the machine to do is to give us an output of 1 any time it has read at least three 1s in a row off of the tape. When there are not at least three 1s in a row, it should output a 0. The reading and outputting can go on indefinitely. The diagram with the labelled states is known as a state diagram and provides a visual path of the possible states that the machine can enter, dependent upon the input. The red arrowed lines indicate an input of 0 from the tape to the machine. The blue arrowed lines indicate an input of 1. Output from the machine is labelled in neon green.

 

 

 

The Turing Machine


 

The machine starts off in the Start state. The first input is a 1, so we can follow the blue line to State 1; the output is 0 because three or more 1s have not yet been read in. The next input is a 0, which leads the machine back to the starting state by following the red line. The read/write head on the Turing machine advances to the next input, which is a 1. Again, this takes the machine to State 1 and the read/write head advances to the next symbol on the tape. This, too, is a 1, leading to State 2. The machine is still outputting 0s since it has not yet encountered three 1s in a row. The next input is also a 1, and following the blue line leads to State 3; the machine now outputs a 1, as it has read in at least three 1s. From this point, as long as the machine keeps reading in 1s, it will stay in State 3 and continue to output 1s. If any 0s are encountered, the machine will return to the Start state and start counting 1s all over.
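The behaviour walked through above can be written out as a small Python sketch of the same four-state machine. The state names follow the description; the dictionary-based encoding is simply one convenient way to express the transitions.

    # Sketch of the state machine described above: it outputs 1 once it has seen
    # at least three 1s in a row, and any 0 sends it back to Start.
    def run_machine(tape):
        state, outputs = "Start", []
        transitions = {                      # (state, input) -> next state
            ("Start", 1): "State 1", ("State 1", 1): "State 2",
            ("State 2", 1): "State 3", ("State 3", 1): "State 3",
        }
        for symbol in tape:
            state = transitions.get((state, symbol), "Start")   # a 0 returns to Start
            outputs.append(1 if state == "State 3" else 0)
        return outputs

    print(run_machine([1, 0, 1, 1, 1, 1, 0, 1]))
    # -> [0, 0, 0, 0, 1, 1, 0, 0]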

Turing's purpose was not to invent a computer, but rather to describe problems which are logically possible to solve. His hypothetical machine, however, foreshadowed certain characteristics of modern computers that would follow. For example, the endless tape could be seen as a form of general purpose internal memory for the machine in that the machine was able to read, write, and erase it--just like modern day RAM.

 

 

 

ENIAC

 


 

Back in America, with the success of Aiken's Harvard Mark I as the first major American development in the computing race, work was proceeding on the next great breakthrough by the Americans. Their second contribution was the development of the giant ENIAC machine by John W. Mauchly and J. Presper Eckert at the University of Pennsylvania. ENIAC (Electronic Numerical Integrator and Computer) used a word of 10 decimal digits instead of binary ones like previous automated calculators/computers. ENIAC was also the first machine to use more than 2,000 vacuum tubes; it used nearly 18,000 of them. Housing all those vacuum tubes and the machinery required to keep them cool took up over 167 square meters (1800 square feet) of floor space. Nonetheless, it had punched-card input and output and arithmetically had 1 multiplier, 1 divider-square rooter, and 20 adders employing decimal "ring counters," which served as adders and also as quick-access (0.0002 second) read-write register storage.

The executable instructions composing a program were embodied in the separate units of ENIAC, which were plugged together to form a route through the machine for the flow of computations. These connections had to be redone for each different problem, together with presetting function tables and switches. This "wire-your-own" instruction technique was inconvenient, and only with some license could ENIAC be considered programmable; it was, however, efficient in handling the particular programs for which it had been designed. ENIAC is generally acknowledged to be the first successful high-speed electronic digital computer (EDC) and was productively used from 1946 to 1955. A controversy developed in 1971, however, over the patentability of ENIAC's basic digital concepts, the claim being made that another U.S. physicist, John V. Atanasoff, had already used the same ideas in a simpler vacuum-tube device he built in the 1930s while at Iowa State College. In 1973, the court found in favor of the company relying on the Atanasoff claim, and Atanasoff received the acclaim he rightly deserved.

 

John von Neumann

In 1945, mathematician John von Neumann undertook a study of computation that demonstrated that a computer could have a simple, fixed structure, yet be able to execute any kind of computation given properly programmed control without the need for hardware modification. Von Neumann contributed a new understanding of how practical fast computers should be organized and built; these ideas, often referred to as the stored-program technique, became fundamental for future generations of high-speed digital computers and were universally adopted. The primary advance was the provision of a special type of machine instruction called conditional control transfer--which permitted the program sequence to be interrupted and reinitiated at any point, similar to the system suggested by Babbage for his analytical engine--and by storing all instruction programs together with data in the same memory unit, so that, when desired, instructions could be arithmetically modified in the same way as data. Thus, data was the same as program.
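The stored-program idea, instructions and data held in one memory with a conditional transfer that can redirect execution, can be illustrated with a toy machine in Python. The opcodes and memory layout below are invented for this sketch and do not depict any specific historical machine.

    # Illustrative stored-program sketch: instructions and data share one memory,
    # and a conditional transfer can redirect execution (opcodes are invented).
    memory = [
        ("LOAD", 6),            # 0: accumulator <- memory[6]        (running total)
        ("ADD", 7),             # 1: accumulator += memory[7]        (counter)
        ("STORE", 6),           # 2: memory[6] <- accumulator
        ("DEC", 7),             # 3: memory[7] -= 1
        ("JUMP_IF_POS", 7, 0),  # 4: if memory[7] > 0, continue at address 0
        ("HALT",),              # 5
        0,                      # 6: data cell -- running total
        4,                      # 7: data cell -- counter
    ]

    def run(memory):
        acc, pc = 0, 0
        while True:
            inst = memory[pc]
            op = inst[0]
            if op == "LOAD":
                acc = memory[inst[1]]
            elif op == "ADD":
                acc += memory[inst[1]]
            elif op == "STORE":
                memory[inst[1]] = acc
            elif op == "DEC":
                memory[inst[1]] -= 1
            elif op == "JUMP_IF_POS" and memory[inst[1]] > 0:
                pc = inst[2]
                continue
            elif op == "HALT":
                return memory
            pc += 1

    print(run(memory)[6])   # 4 + 3 + 2 + 1 = 10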

As a result of these techniques and several others, computing and programming became faster, more flexible, and more efficient, with the instructions in subroutines performing far more computational work. Frequently used subroutines did not have to be reprogrammed for each new problem but could be kept intact in "libraries" and read into memory when needed. Thus, much of a given program could be assembled from the subroutine library. The all-purpose computer memory became the assembly place in which parts of a long computation were stored, worked on piecewise, and assembled to form the final results. The computer control served as an errand runner for the overall process. As soon as the advantages of these techniques became clear, the techniques became standard practice. The first generation of modern programmed electronic computers to take advantage of these improvements appeared in 1947.

This group included computers using random access memory (RAM), which is a memory designed to give almost constant access to any particular piece of information. These machines had punched-card or punched-tape input and output devices and RAMs of 1,000 words. Physically, they were much more compact than ENIAC: some were about the size of a grand piano and required 2,500 small electron tubes, far fewer than required by the earlier machines. The first-generation stored-program computers required considerable maintenance, attained perhaps 70% to 80% reliable operation, and were used for 8 to 12 years. Typically, they were programmed directly in machine language, although by the mid-1950s progress had been made in several aspects of advanced programming. This group of machines included EDVAC and UNIVAC, the first commercially available computers.

 

 

EDVAC

 


 

EDVAC (Electronic Discrete Variable Automatic Computer) was to be a vast improvement upon ENIAC. Mauchly and Eckert started working on it two years before ENIAC even went into operation. Their idea was to have the program for the computer stored inside the computer. This would be possible because EDVAC was going to have more internal memory than any other computing device to date. Memory was to be provided through the use of mercury delay lines: given a tube of mercury, an electronic pulse could be bounced back and forth and retrieved at will--another two-state device for storing 0s and 1s. This on/off switchability for the memory was required because EDVAC was to use binary rather than decimal numbers, thus simplifying the construction of the arithmetic units.

 

 

 

Technology Advances

 


 

In the 1950s, two devices were invented that would improve the computer field and set off the computer revolution. The first of these two devices was the transistor. Invented in 1947 by William Shockley, John Bardeen, and Walter Brattain of Bell Labs, the transistor was fated to end the days of vacuum tubes in computers, radios, and other electronics.

The vacuum tube, used up to this time in almost all the computers and calculating machines, had been invented by American physicist Lee De Forest in 1906. The vacuum tube worked by using large amounts of electricity to heat a filament inside the tube until it was cherry red. One result of heating this filament up was the release of electrons into the tube, which could be controlled by other elements within the tube. De Forest's original device was a triode, which could control the flow of electrons to a positively charged plate inside the tube. A zero could then be represented by the absence of an electron current to the plate; the presence of a small but detectable current to the plate represented a one.

Vacuum tubes were highly inefficient, required a great deal of space, and needed to be replaced often. Computers such as ENIAC had 18,000 tubes in them, and housing all these tubes and cooling the rooms from the heat they produced was not cheap. The transistor promised to solve all of these problems, and it did so. Transistors, however, had their problems too. The main problem was that transistors, like other electronic components, needed to be soldered together. As a result, the more complex the circuits became, the more complicated and numerous the connections between individual transistors became, and the greater the likelihood of faulty wiring.

In 1958, this problem too was solved, by Jack St. Clair Kilby of Texas Instruments. He manufactured the first integrated circuit, or chip. A chip is really a collection of tiny transistors which are connected together when the chip is manufactured. Thus, the need for soldering together large numbers of transistors was practically nullified; now connections were needed only to other electronic components. In addition to saving space, the speed of the machine was increased, since the distance the electrons had to travel was diminished.

 

The Altair

 


 

In 1971, Intel released the first microprocessor. The microprocessor was a specialized integrated circuit which was able to process four bits of data at a time. The chip included its own arithmetic logic unit, but a sizable portion of the chip was taken up by the control circuits for organizing the work, which left less room for the data-handling circuitry. Thousands of hackers could now aspire to own their own personal computer. Computers up to this point had been strictly the domain of the military, universities, and very large corporations, simply because of the enormous cost of the machines and their maintenance. In 1975, the cover of Popular Electronics featured a story on the "world's first minicomputer kit to rival commercial models....Altair 8800." The Altair, produced by a company called Micro Instrumentation and Telemetry Systems (MITS), retailed for $397, which made it easily affordable for the small but growing hacker community.

The Altair was not designed for the computer novice. The kit required assembly by the owner, and then it was necessary to write software for the machine, since none was yet commercially available. The Altair had 256 bytes of memory--about the size of a paragraph of text--and had to be programmed in machine code--0s and 1s. The programming was accomplished by manually flipping switches located on the front of the Altair.

 

Creation of Microsoft

Two young hackers were intrigued by the Altair, having seen the article in Popular Electronics. They decided on their own that the Altair needed software and took it upon themselves to contact MITS owner Ed Roberts and offer to provide him with a BASIC which would run on the Altair. BASIC (Beginner's All-purpose Symbolic Instruction Code) had originally been developed in 1963 by Thomas Kurtz and John Kemeny, members of the Dartmouth mathematics department. BASIC was designed to provide an interactive, easy method for upcoming computer scientists to program computers. It allowed the usage of statements such as print "hello" or let b=10. It would be a great boost for the Altair if BASIC were available, so Roberts agreed to pay for it if it worked. The two young hackers worked feverishly and finished just in time to present it to Roberts. It was a success. The two young hackers? They were William Gates and Paul Allen. They later went on to form Microsoft and produce BASIC and operating systems for various machines.
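For comparison, here is roughly what those two statements look like in Python today; this is only an illustrative sketch, not Altair BASIC itself, which used numbered lines and uppercase keywords.

    # Roughly the same two statements mentioned above, written in Python.
    # (Altair BASIC itself would use numbered lines, e.g. 10 LET B=10.)
    b = 10
    print("hello")
    print(b)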

 

BASIC & Other Languages

BASIC was not the only game in town. By this time, a number of other specialized and general-purpose languages had been developed. A surprising number of today's popular languages have actually been around since the 1950s. FORTRAN, developed by a team of IBM programmers, was one of the first high-level languages--languages in which the programmer does not have to deal with the machine code of 0s and 1s. It was designed to express scientific and mathematical formulas. For a high-level language, it was not very easy to program in. Luckily, better languages came along.

In 1958, a group of computer scientists met in Zurich, and from this meeting came ALGOL--ALGOrithmic Language. ALGOL was intended to be a universal, machine-independent language, but its designers were not as successful, since they did not have the close association with IBM that FORTRAN enjoyed. A later refinement, ALGOL-60, strongly influenced many subsequent languages, including C, which is the standard choice for programming requiring detailed control of hardware. After that came COBOL--COmmon Business Oriented Language. COBOL was developed in 1960 by a joint committee. It was designed to produce applications for the business world and had the novel approach of separating the data descriptions from the actual program. This enabled the data descriptions to be referred to by many different programs.

In the late 1960s, a Swiss computer scientist, Niklaus Wirth, released the first of many languages. This first language, called Pascal, forced programmers to program in a structured, logical fashion and to pay close attention to the different types of data in use. He later followed up Pascal with Modula-2, which was very similar to Pascal in structure and syntax.

 

The PC Explosion

Following the introduction of the Altair, a veritable explosion of personal computers occurred, starting with Steve Jobs and Steve Wozniak exhibiting the first Apple II at the First West Coast Computer Faire in San Francisco. The Apple II boasted built-in BASIC, colour graphics, and a 4100 character memory for only $1298. Programs and data could be stored on an everyday audio-cassette recorder. Before the end of the fair, Wozniak and Jobs had secured 300 orders for the Apple II, and from there Apple just took off.

Also introduced in 1977 was the TRS-80. This was a home computer manufactured by Tandy Radio Shack. Its second incarnation, the TRS-80 Model II, came complete with a 64,000 character memory and a disk drive to store programs and data on. At this time, only Apple and TRS had machines with disk drives. With the introduction of the disk drive, personal computer applications took off, as a floppy disk was a most convenient publishing medium for distribution of software.

IBM, which up to this time had been producing mainframes and minicomputers for medium to large-sized businesses, decided that it had to get into the act and started working on the Acorn, which would later be called the IBM PC. The PC was the first computer designed for the home market to feature a modular design, so that pieces could easily be added to the architecture. Most of the components, surprisingly, came from outside of IBM, since building it with IBM parts would have cost too much for the home computer market. When it was introduced, the PC came with a 16,000 character memory, a keyboard from an IBM electric typewriter, and a connection for a tape cassette player, for $1265.

By 1984, Apple and IBM had come out with new models. Apple released the first-generation Macintosh, which was the first computer to come with a graphical user interface (GUI) and a mouse. The GUI made the machine much more attractive to home computer users because it was easy to use. Sales of the Macintosh soared like nothing ever seen before. IBM was hot on Apple's tail and released the 286-AT, which, with applications like Lotus 1-2-3, a spreadsheet, and Microsoft Word, quickly became the favourite of business concerns.

That brings us up to about ten years ago. Now people have their own personal graphics workstations and powerful home computers. The average computer a person might have in their home is more powerful by several orders of magnitude than a machine like ENIAC. The computer revolution has been the fastest growing technology in man's history.

 

PCs Today

As an example of the wonders of this modern-day technology, let's take a look at this presentation. The whole presentation from start to finish was prepared on a variety of computers using a variety of different software applications. An application is any program that a computer runs that enables you to get things done. This includes things like word processors for creating text, graphics packages for drawing pictures, and communication packages for moving data around the globe.

The colour slides that you have been looking at were prepared on an IBM 486 machine running Microsoft® Windows® 3.1. Windows is a type of operating system. Operating systems are the interface between the user and the computer, enabling the user to type high-level commands such as "format a:" into the computer, rather than issuing complex assembler or C commands. Windows is one of the numerous graphical user interfaces around that allow the user to manipulate their environment using a mouse and icons. Other examples of graphical user interfaces (GUIs) include the X Window System, which runs on UNIX® machines, and Mac OS X, which is the operating system of the Macintosh.

Once Windows was running, I used a multimedia tool called Freelance Graphics to create the slides. Freelance, from Lotus Development Corporation, allows the user to manipulate text and graphics with the explicit purpose of producing presentations and slides. It contains drawing tools and numerous text placement tools. It also allows the user to import text and graphics from a variety of sources. A number of the graphics used, for example the shaman, are from clip art collections on a CD-ROM.

The text for the lecture was also created on a computer. Originally, I used Microsoft® Word, which is a word processor available for the Macintosh and for Windows machines. Once I had typed up the lecture, I decided to make it available, slides and all, electronically by placing the slides and the text onto my local Web server.

 

The Web

The Web (or more properly, the World Wide Web) was developed at CERN in Switzerland as a new form of communicating text and graphics across the Internet making use of the hypertext markup language (HTML) as a way to describe the attributes of the text and the placement of graphics, sounds, or even movie clips. Since it was first introduced, the number of users has blossomed and the number of sites containing information and searchable archives has been growing at an unprecedented rate. It is now even possible to order your favourite Pizza Hut pizza in Santa Cruz via the Web!

 

Servers

The actual workings of a Web server are beyond the scope of this course, but knowledge of two things is important: 1) in order to use the Web, someone needs to be running a Web server on a machine for which such a server exists; and 2) the local user needs to run an application program to connect to the server; this application is known as a client program. Server programs are available for many types of computers and operating systems, such as Apache for UNIX (and other operating systems), Microsoft Internet Information Server (IIS) for Windows/NT, and WebStar for the Macintosh. Most client programs available today are capable of displaying images, playing music, or showing movies, and they make use of a graphical interface with a mouse. Common client programs include Netscape, Opera, and Microsoft Internet Explorer (for Windows/Macintosh computers). There are also special clients that only display text, like lynx for UNIX systems, and clients designed to help the visually impaired.

As mentioned earlier, servers contain files full of information about courses, research interests and games, for example. All of this information is formatted in a language called HTML (hypertext markup language). HTML allows the user to insert formatting directives into the text, much like some of the first word processors for home computers. Anyone who is taking or has taken English 100 knows that there is a specific style and format for submitting essays. The same is true of HTML documents.
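As a rough sketch of what such formatting directives look like, the short Python snippet below writes a minimal HTML page to disk; the tags are ordinary HTML, and the file name is just an example.

    # Write a minimal HTML document; the tags are the "formatting directives"
    # described above, and "lecture.html" is only an example file name.
    page = """<html>
    <head><title>Computing Lecture</title></head>
    <body>
    <h1>Computing Lecture</h1>
    <p>This paragraph is <b>bold</b> in places and <i>italic</i> in others.</p>
    </body>
    </html>
    """

    with open("lecture.html", "w", encoding="utf-8") as f:
        f.write(page)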

More information about HTML is now readily available everywhere, including in your local bookstore.

This brings us to the conclusion of this lecture. Please e-mail any comments you have to thelectureATeingangNOSPAMorg (replace AT with @ and NOSPAM with .).

 

 

http://www.eingang.org/Lecture

 

 

http://www.cambridge.org/elt/infotech

 

How Removable Storage Works

 

A tiny hard drive powers this removable storage device.

Removable storage has been around almost as long as the computer itself. Early removable storage was based on magnetic tape like that used by an audio cassette. Before that, some computers even used paper punch cards to store information!

We've come a long way since the days of punch cards. New removable storage devices can store hundreds of megabytes (and even gigabytes) of data on a single disk, cassette, card or cartridge. In this article, you will learn about the three major storage technologies. We'll also talk about which devices use each technology and what the future holds for this medium. But first, let's see why you would want removable storage.

 

Portable Memory

There are several reasons why removable storage is useful:

• Commercial software

• Making back-up copies of important information

• Transporting data between two computers

• Storing software and information that you don't need to access constantly

• Copying information to give to someone else

• Securing information that you don't want anyone else to access

Modern removable storage devices offer an incredible number of options, with storage capacities ranging from the 1.44 megabytes (MB) of a standard floppy to the upwards of 20-gigabyte (GB) capacity of some portable drives. All of these devices fall into one of three categories:

• Magnetic storage

• Optical storage

• Solid-state storage

In the following sections, we will take an in-depth look at each of these technologies.
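To put the capacities quoted above in perspective, here is a quick back-of-the-envelope calculation in Python (treating 1 GB as 1024 MB):

    # How many standard 1.44 MB floppies hold as much as a 20 GB portable drive,
    # using the capacities quoted above (1 GB taken as 1024 MB).
    floppy_mb = 1.44
    drive_gb = 20

    floppies = drive_gb * 1024 / floppy_mb
    print(round(floppies))   # roughly 14,222 floppy disks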

 

Magnetic Storage

The most common and enduring form of removable-storage technology is magnetic storage. For example, 1.44-MB floppy-disk drives using 3.5-inch diskettes have been around for about 15 years, and they are still found on almost every computer sold today. In most cases, removable magnetic storage uses a drive, which is a mechanical device that connects to the computer. You insert the media, which is the part that actually stores the information, into the drive.

Just like a hard drive, the media used in removable magnetic-storage devices is coated with iron oxide. This oxide is a ferromagnetic material, meaning that if you expose it to a magnetic field it is permanently magnetized. The media is typically called a disk or a cartridge. The drive uses a motor to rotate the media at a high speed, and it accesses (reads) the stored information using small devices called heads.

Each head has a tiny electromagnet, which consists of an iron core wrapped with wire. The electromagnet applies a magnetic flux to the oxide on the media, and the oxide permanently "remembers" the flux it sees. During writing, the data signal is sent through the coil of wire to create a magnetic field in the core. At the gap, the magnetic flux forms a fringe pattern. This pattern bridges the gap, and the flux magnetizes the oxide on the media. During reading, the magnetized media passing across the gap creates a varying magnetic field in the core and therefore a signal in the coil. This signal is then sent to the computer as binary data.
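A deliberately simplified sketch of this idea, assuming we can model the oxide coating as a list of flux directions (+1 or -1): writing stores each bit as a direction, and reading recovers the bits from the stored directions. Real drives use far more elaborate encodings, error correction and analog signal processing.

    # Toy model of magnetic storage: the "media" is a list of flux directions.
    def write_bits(bits):
        """Store each bit as a magnetization direction on the media."""
        return [+1 if bit else -1 for bit in bits]

    def read_bits(media):
        """Recover the bits from the stored magnetization directions."""
        return [1 if flux > 0 else 0 for flux in media]

    data = [1, 0, 1, 1, 0, 0, 1, 0]
    media = write_bits(data)
    assert read_bits(media) == data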

 

Magnetic: Direct Access

Magnetic disks or cartridges have a few things in common:

• They use a thin plastic or metal base material coated with iron oxide.

• They can record information instantly.

• They can be erased and reused many times.

• They are reasonably inexpensive and easy to use.

If you have ever used an audio cassette, you know that it has one big disadvantage -- it is a sequential device. The tape has a beginning and an end, and to move the tape to a later song you have to use the fast forward and rewind buttons to find the start of that song. This is because the tape heads are stationary.

A disk or cartridge, like a cassette tape, is made from a thin piece of plastic coated with magnetic material on both sides. However, it is shaped like a disk rather than a long, thin ribbon. The tracks are arranged in concentric rings so the software can jump from "file 1" to "file 19" without having to fast forward through files 2 through 18. The disk or cartridge spins like a record and the heads move to the correct track, providing what is known as direct-access storage. Some removable devices actually have a platter of magnetic disks, similar to the set-up in a hard drive. Tape is still used for some long-term storage, such as backing up a server's hard drive, in which quick access to the data is not essential.

In the illustration above, you can see how the disk is divided into tracks (brown) and sectors (yellow).
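To make the contrast concrete, here is a small illustrative Python sketch of sequential versus direct access; the file names, track and sector numbers are invented for the example.

    # Sequential access: to reach a file on tape you must pass every file before it.
    tape = [f"file{n}" for n in range(1, 20)]

    def read_from_tape(name):
        steps = 0
        for item in tape:
            steps += 1
            if item == name:
                return item, steps
        raise FileNotFoundError(name)

    # Direct access: the drive moves the head straight to the track and sector
    # recorded in the disk's directory (the numbers here are invented).
    directory = {f"file{n}": (n % 4, n) for n in range(1, 20)}   # name -> (track, sector)
    disk = {location: name for name, location in directory.items()}

    def read_from_disk(name):
        track, sector = directory[name]    # look up where the file lives
        return disk[(track, sector)], 1    # one seek, regardless of position

    print(read_from_tape("file19"))   # ('file19', 19) -- passed 19 files
    print(read_from_disk("file19"))   # ('file19', 1)  -- jumped straight there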

 

The read/write heads ("writing" is saving new information to the storage media) do not touch the media when the heads are traveling between tracks. There is normally some type of mechanism that you can set to protect a disk or cartridge from being written to. For example, electronic optics check for the presence of an opening in the lower corner of a 3.5-inch diskette (or a notch in the side of a 5.25-inch diskette) to see if the user wants to prevent data from being written to it.

 

Magnetic: Zip

Over the years, magnetic technology has improved greatly. Because of the immense popularity and low cost of floppy disks, higher-capacity removable storage has not been able to completely replace the floppy drive. But there are a number of alternatives that have become very popular in their own right. One such example is the Zip from Iomega.

 

The Zip drive comes in several configurations, including SCSI, USB, parallel port and internal ATAPI.

 

The main thing that separates a Zip disk from a floppy disk is the magnetic coating used. On a Zip disk, the coating is of a much higher quality. The higher-quality coating means that the read/write head on a Zip disk can be significantly smaller than on a floppy disk (by a factor of 10 or so). The smaller head, in conjunction with a head-positioning mechanism that is similar to the one used in a hard disk, means that a Zip drive can pack thousands of tracks per inch on the disk surface. Zip drives also use a variable number of sectors per track to make the best use of disk space. All of these features combine to create a floppy disk that holds a huge amount of data -- up to 750 MB at the moment.

 

Magnetic: Cartridges

Another method of using magnetic technology for removable storage is essentially taking a hard disk and putting it in a self-contained case. One of the more successful products using this method is the Iomega Jaz. Each Jaz cartridge is basically a hard disk, with several platters, contained in a hard, plastic case. The cartridge contains neither the heads nor the motor for spinning the disk; both of these items are in the drive unit.

 

The current Jaz drive uses 2-GB cartridges, but also accepts the 1-GB cartridge used by the original Jaz.

 

Magnetic: Portable Drives

Completely external, portable hard drives are quickly becoming popular, due in a great part to USB technology. These units, like the ones inside a typical PC, have the drive mechanism and the media all in one sealed case. The drive connects to the PC via USB cable and, after the driver software is installed the first time, is automatically listed by Windows as an available drive.

 

This 20-GB Pockey Drive fits in the palm of your hand.

 

Another type of portable hard drive is called a microdrive. These tiny hard drives are built into PCMCIA cards that can be plugged into any device with a PCMCIA slot, such as a laptop computer.

 

This microdrive holds 340 MB and is about the size of a matchbox.

 

You can read more about magnetic storage in How Hard Disks Work and How Tape Recorders Work. To learn about optical storage technology, check out the next page.

 

Optical Storage

The optical storage device that most of us are familiar with is the compact disc (CD). A CD can store huge amounts of digital information (783 MB) on a very small surface that is incredibly inexpensive to manufacture. The design that makes this possible is a simple one: The CD surface is a mirror covered with billions of tiny bumps that are arranged in a long, tightly wound spiral. The CD player reads the bumps with a precise laser and interprets the information as bits of data.

The spiral of bumps on a CD starts in the center. CD tracks are so small that they have to be measured in microns (millionths of a meter). The CD track is approximately 0.5 microns wide, with 1.6 microns separating one track from the next. The elongated bumps are each 0.5 microns wide, a minimum of 0.83 microns long and 125 nanometers (billionths of a meter) high.

Most of the mass of a CD is an injection-molded piece of clear polycarbonate plastic that is about 1.2 millimeters thick. During manufacturing, this plastic is impressed with the microscopic bumps that make up the long, spiral track. A thin, reflective aluminum layer is then coated on the top of the disc, covering the bumps. The tricky part of CD technology is reading all the tiny bumps correctly, in the right order and at the right speed. To do all of this, the CD player has to be exceptionally precise when it focuses the laser on the track of bumps.

When you play a CD, the laser beam passes through the CD's polycarbonate layer, reflects off the aluminum layer and hits an optoelectronic device that detects changes in light. The bumps reflect light differently than the flat parts of the aluminum layer, which are called lands. The optoelectronic sensor detects these changes in reflectivity, and the electronics in the CD-player drive interpret the changes as data bits.

 

The basic parts of a compact-disc player

 

Optical: CD-R/CD-RW

That is how a normal CD works, which is great for prepackaged software, but no help at all as removable storage for your own files. That's where CD-recordable (CD-R) and CD-rewritable (CD-RW) come in.

CD-R works by replacing the aluminum layer in a normal CD with an organic dye compound. This compound is normally reflective, but when the laser focuses on a spot and heats it to a certain temperature, it "burns" the dye, causing it to darken. When you want to retrieve the data you wrote to the CD-R, the laser moves back over the disc and thinks that each burnt spot is a bump. The problem with this approach is that you can only write data to a CD-R once. After the dye has been burned in a spot, it cannot be changed back.

CD-RW fixes this problem by using phase change, which relies on a very special mixture of antimony, indium, silver and tellurium. This particular compound has an amazing property: When heated to one temperature, it crystallizes as it cools and becomes very reflective; when heated to another, higher temperature, the compound does not crystallize when it cools and so becomes dull in appearance.

 

The Predator is a fast CD-RW drive from Iomega.

 

CD-RW drives have three laser settings to make use of this property:

• Read - The normal setting that reflects light to the optoelectronic sensor

• Erase - The laser set to the temperature needed to crystallize the compound

• Write - The laser set to the temperature needed to de-crystallize the compound

Other optical devices that deviate from the CD standard, such as DVD, employ approaches comparable to CD-R and CD-RW. An older, hybrid technology called magneto-optical (MO) is seldom used anymore. MO uses a laser to heat the surface of the media. Once the surface reaches a particular temperature, a magnetic head moves across the media, changing the polarity of the particles as needed.

http://computer.howstuffworks.com/removable-storage.htm

 

Word processor

A word processor (more formally known as document preparation system) is a computer application used for the production (including composition, editing, formatting, and possibly printing) of any sort of printable material.

Word processor may also refer to a type of stand-alone office machine, popular in the 1970s and 1980s, combining the keyboard text-entry and printing functions of an electric typewriter with a dedicated processor (like a computer processor) for the editing of text. Although features and design varied between manufacturers and models, with new features added as technology advanced, word processors for several years usually featured a monochrome display and the ability to save documents on memory cards or diskettes. Later models introduced innovations such as spell-checking programs, increased formatting options, and dot-matrix printing. As the more versatile combination of a personal computer and separate printer became commonplace, most business-machine companies stopped manufacturing the word processor as a stand-alone office machine. As of 2009 there were only two U.S. companies, Classic and AlphaSmart, which still made stand-alone word processors.[1] Many older machines, however, remain in use.

Word processors are descended from early text formatting tools (sometimes called text justification tools, from their only real capability). Word processing was one of the earliest applications for the personal computer in office productivity.

Although early word processors used tag-based markup for document formatting, most modern word processors take advantage of a graphical user interface providing some form of What You See Is What You Get editing. Most are powerful systems consisting of one or more programs that can produce any arbitrary combination of images, graphics and text, the latter handled with type-setting capability.

Microsoft Word is the most widely used word processing software. Microsoft estimates that over 500,000,000 people use the Microsoft Office suite,[2] which includes Word. Many other word processing applications exist, including WordPerfect (which dominated the market from the mid-1980s to early-1990s on computers running Microsoft's MS-DOS operating system) and open source applications OpenOffice.org Writer, AbiWord, KWord, and LyX. Web-based word processors, such as Google Docs, are a relatively new category.

 

                                          Word processing

 

Characteristics

Word processing typically implies the presence of text manipulation functions that extend beyond a basic ability to enter and change text, such as automatic generation of:

• batch mailings using a form letter template and an address database (also called mail merging; see the sketch after this list);

• indices of keywords and their page numbers;

• tables of contents with section titles and their page numbers;

• tables of figures with caption titles and their page numbers;

• cross-referencing with section or page numbers;

• footnote numbering;

• new versions of a document using variables (e.g. model numbers, product names, etc.)
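A minimal mail-merge sketch in Python, assuming a form-letter template with placeholders and a small address "database" held in memory; a real word processor would read these from a document and a data source.

    # Minimal mail merge: fill a form-letter template from a list of records.
    template = (
        "Dear {name},\n"
        "Thank you for your order of {product}.\n"
        "It will be shipped to {city}.\n"
    )

    addresses = [
        {"name": "A. Smith", "product": "Model 100", "city": "Kyiv"},
        {"name": "B. Jones", "product": "Model 200", "city": "Lviv"},
    ]

    for record in addresses:
        print(template.format(**record))
        print("-" * 30)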

Other word processing functions include "spell checking" (actually checks against wordlists), "grammar checking" (checks for what seem to be simple grammar errors), and a "thesaurus" function (finds words with similar or opposite meanings). Other common features include collaborative editing, comments and annotations, support for images and diagrams and internal cross-referencing.

Word processors can be distinguished from several other, related forms of software:

Text editors (modern examples of which include Notepad, BBEdit, Kate, and Gedit) were the precursors of word processors. While offering facilities for composing and editing text, they do not format documents. This can be done by batch document processing systems, starting with TJ-2 and RUNOFF and still available in such systems as LaTeX (as well as programs that implement the paged-media extensions to HTML and CSS). Text editors are now used mainly by programmers, website designers, computer system administrators, and, in the case of LaTeX, by mathematicians and scientists (for complex formulas and for citations in rare languages). They are also useful when fast startup times, small file sizes, editing speed and simplicity of operation are preferred over formatting.

Later desktop publishing programs were specifically designed to allow elaborate layout for publication, but often offered only limited support for editing. Typically, desktop publishing programs allowed users to import text that was written using a text editor or word processor.

Almost all word processors enable users to employ styles, which are used to automate consistent formatting of text body, titles, subtitles, highlighted text, and so on.

Styles greatly simplify managing the formatting of large documents, since changing a style automatically changes all text that the style has been applied to. Even in shorter documents styles can save a lot of time while formatting. However, most help files refer to styles as an 'advanced feature' of the word processor, which often discourages users from using styles regularly.

Document statistics

Most current word processors can calculate various statistics pertaining to a document. These usually include:

• Character count, word count, sentence count, line count, paragraph count, page count.

• Word, sentence and paragraph length.

• Editing time.

Errors are common; for instance, a dash surrounded by spaces — like either of these — may be counted as a word.
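A small Python sketch of the kind of naive counting described above: splitting on whitespace counts a free-standing dash as a "word".

    # Naive document statistics: a bare dash surrounded by spaces is counted as a word.
    text = "Errors are common - naive counters miss cases like this - often."

    tokens = text.split()
    print("characters:", len(text))
    print("naive word count:", len(tokens))    # 12 -- the two dashes are included
    words = [t for t in tokens if any(c.isalnum() for c in t)]
    print("word count without bare dashes:", len(words))   # 10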

Typical usage

Word processors have a variety of uses and applications within the business world, home, and education.

Business

Within the business world, word processors are extremely useful tools. Typical uses include:

• legal copies

• letters and letterhead

• memos

• reference documents

Businesses tend to have their own format and style for any of these. Thus, versatile word processors with layout editing and similar capabilities find widespread use in most businesses.

Education

Many schools have begun to teach typing and word processing to their students, starting as early as elementary school. Typically these skills are developed throughout secondary school in preparation for the business world. Undergraduate students typically spend many hours writing essays. Graduate and doctoral students continue this trend, as well as creating works for research and publication.

Home

While many homes have word processors on their computers, word processing in the home tends to be educational, planning or business related, dealing with assignments or work being completed at home, or occasionally recreational, e.g. writing short stories. Some use word processors for letter writing, résumé creation, and card creation. However, many of these home publishing processes have been taken over by desktop publishing programs specifically oriented toward home use, which are better suited to these types of documents.

History

Toshiba JW-10, the first word processor for the Japanese language (1971-1978 IEEE milestones)

 Examples of standalone word processor typefaces c. 1980-1981

Brother WP-1400D editing electronic typewriter (1994)

The term word processing was invented by IBM in the late 1960s. By 1971 it was recognized by the New York Times as a "buzz word".[3] A 1974 Times article referred to "the brave new world of Word Processing or W/P. That's International Business Machines talk... I.B.M. introduced W/P about five years ago for its Magnetic Tape Selectric Typewriter and other electronic razzle-dazzle."

IBM defined the term in a broad and vague way as "the combination of people, procedures, and equipment which transforms ideas into printed communications," and originally used it to include dictating machines and ordinary, manually-operated Selectric typewriters. By the early seventies, however, the term was generally understood to mean semiautomated typewriters affording at least some form of electronic editing and correction, and the ability to produce perfect "originals." Thus, the Times headlined a 1974 Xerox product as a "speedier electronic typewriter", but went on to describe the product, which had no screen, as "a word processor rather than strictly a typewriter, in that it stores copy on magnetic tape or magnetic cards for retyping, corrections, and subsequent printout."

Electromechanical paper-tape-based equipment such as the Friden Flexowriter had long been available; the Flexowriter allowed for operations such as repetitive typing of form letters (with a pause for the operator to manually type in the variable information)[8], and when equipped with an auxiliary reader, could perform an early version of "mail merge". Circa 1970 it began to be feasible to apply electronic computers to office automation tasks. IBM's Mag Tape Selectric Typewriter (MTST) and later Mag Card Selectric (MCST) were early devices of this kind, which allowed editing, simple revision, and repetitive typing, with a one-line display for editing single lines.

The New York Times, reporting on a 1971 business equipment trade show, said

The "buzz word" for this year's show was "word processing," or the use of electronic equipment, such as typewriters; procedures and trained personnel to maximize office efficiency. At the IBM exhibition a girl [sic] typed on an electronic typewriter. The copy was received on a magnetic tape cassette which accepted corrections, deletions, and additions and then produced a perfect letter for the boss's signature....

In 1971, a third of all working women in the United States were secretaries, and they could see that word processing would have an impact on their careers. Some manufacturers, according to a Times article, urged that "the concept of 'word processing' could be the answer to Women's Lib advocates' prayers. Word processing will replace the 'traditional' secretary and give women new administrative roles in business and industry."

The 1970s word processing concept did not refer merely to equipment, but, explicitly, to the use of equipment for "breaking down secretarial labor into distinct components, with some staff members handling typing exclusively while others supply administrative support. A typical operation would leave most executives without private secretaries. Instead one secretary would perform various administrative tasks for three or more executives." A 1971 article said that "Some [secretaries] see W/P as a career ladder into management; others see it as a dead-end into the automated ghetto; others predict it will lead straight to the picket line." The National Secretaries Association, which defined secretaries as people who "can assume responsibility without direct supervision," feared that W/P would transform secretaries into "space-age typing pools." The article considered only the organizational changes resulting from secretaries operating word processors rather than typewriters; the possibility that word processors might result in managers creating documents without the intervention of secretaries was not considered—not surprising in an era when few but secretaries possessed keyboarding skills.

In the early 1970s, computer scientist Harold Koplow was hired by Wang Laboratories to program calculators. One of his programs permitted a Wang calculator to interface with an IBM Selectric typewriter, which was at the time used to calculate and print the paperwork for auto sales.

In 1974, Koplow's interface program was developed into the Wang 1200 Word Processor, an IBM Selectric-based text-storage device. The operator of this machine typed text on a conventional IBM Selectric; when the Return key was pressed, the line of text was stored on a cassette tape. One cassette held roughly 20 pages of text, and could be "played back" (i.e., the text retrieved) by printing the contents on continuous-form paper in the 1200 typewriter's "print" mode. The stored text could also be edited, using keys on a simple, six-key array. Basic editing functions included Insert, Delete, Skip (character, line), and so on.

The labor and cost savings of this device were immediate, and remarkable: pages of text no longer had to be retyped to correct simple errors, and projects could be worked on, stored, and then retrieved for use later on. The rudimentary Wang 1200 machine was the precursor of the Wang Office Information System (OIS), introduced in 1976, whose CRT-based system was a major breakthrough in word processing technology. It displayed text on a CRT screen, and incorporated virtually every fundamental characteristic of word processors as we know them today. It was a true office machine, affordable by organizations such as medium-sized law firms, and easily learned and operated by secretarial staff.

The Wang was not the first CRT-based machine, nor were all of its innovations unique to Wang. In the early 1970s Linolex, Lexitron and Vydec introduced pioneering word-processing systems with CRT display editing. A Canadian electronics company, Automatic Electronic Systems, had introduced a product with similarities to Wang's product in 1973, but went into bankruptcy a year later. In 1976, refinanced by the Canada Development Corporation, it returned to operation as AES Data, and went on to successfully market its brand of word processors worldwide until its demise in the mid-1980s. Its first office product, the AES-90, combined for the first time a CRT screen, a floppy disk and a microprocessor, that is, the very same winning combination that would be used by IBM for its PC seven years later. The AES-90 software was able to handle French and English typing from the start, displaying and printing the texts side-by-side, a Canadian government requirement. The first eight units were delivered to the office of the then Prime Minister, Pierre Elliott Trudeau, in February 1974. Despite these predecessors, Wang's product was a standout, and by 1978 it had sold more of these systems than any other vendor.

In the early 1980s, AES Data Inc. introduced a networked word processor system, called MULTIPLUS, offering multi-tasking and up to 8 workstations all sharing the resources of a centralized computer system, a precursor to today's networks. It followed with the introduction of the SuperPlus and SuperPlus IV systems, which also offered the CP/M operating system in response to client needs. AES Data word processors were placed side-by-side with CP/M software, like Wordstar, to highlight ease of use.

The phrase "word processor" rapidly came to refer to CRT-based machines similar to Wang's. Numerous machines of this kind emerged, typically marketed by traditional office-equipment companies such as IBM, Lanier (marketing AES Data machines, re-badged), CPT, and NBI.[13] All were specialized, dedicated, proprietary systems, with prices in the $10,000 ballpark. Cheap general-purpose computers were still the domain of hobbyists.

Some of the earliest CRT-based machines used cassette tapes for removable-memory storage until floppy diskettes became available for this purpose - first the 8-inch floppy, then the 5-1/4-inch (drives by Shugart Associates and diskettes by Dysan).

Printing of documents was initially accomplished using IBM Selectric typewriters modified for ASCII-character input. These were later replaced by application-specific daisy wheel printers (Diablo, which became a Xerox company, and Qume -- both now defunct). For quicker "draft" printing, dot-matrix line printers were optional alternatives with some word processors.

With the rise of personal computers, and in particular the IBM PC and PC compatibles, software-based word processors running on general-purpose commodity hardware gradually displaced dedicated word processors, and the term came to refer to software rather than hardware. Some programs were modeled after particular dedicated WP hardware. MultiMate, for example, was written for an insurance company that had hundreds of typists using Wang systems, and spread from there to other Wang customers. To adapt to the smaller PC keyboard, MultiMate used stick-on labels and a large plastic clip-on template to remind users of its dozens of Wang-like functions, using the shift, alt and ctrl keys with the 10 IBM function keys and many of the alphabet keys.

Other early word-processing software required users to memorize semi-mnemonic key combinations rather than pressing keys labelled "copy" or "bold." (In fact, many early PCs lacked cursor keys; WordStar famously used the E-S-D-X-centered "diamond" for cursor navigation, and modern vi-like editors encourage use of hjkl for navigation.) However, the price differences between dedicated word processors and general-purpose PCs, and the value added to the latter by software such as VisiCalc, were so compelling that personal computers and word processing software soon became serious competition for the dedicated machines. Word Perfect, XyWrite, Microsoft Word, Wordstar, Workwriter and dozens of other word processing software brands competed in the 1980s. Development of higher-resolution monitors allowed them to provide limited WYSIWYG - What You See Is What You Get, to the extent that typographical features like bold and italics, indentation, justification and margins were approximated on screen.

The mid-to-late 1980s saw the spread of laser printers, a "typographic" approach to word processing, and of true WYSIWYG bitmap displays with multiple fonts (pioneered by the Xerox Alto computer and Bravo word processing program), PostScript, and graphical user interfaces (another Xerox PARC innovation, with the Gypsy word processor which was commercialised in the Xerox Star product range). Standalone word processors adapted by getting smaller and replacing their CRTs with small character-oriented LCD displays. Some models also had computer-like features such as floppy disk drives and the ability to output to an external printer. They also got a name change, now being called "electronic typewriters" and typically occupying a lower end of the market, selling for under $200 USD.

MacWrite, Microsoft Word and other word processing programs for the bit-mapped Apple Macintosh screen, introduced in 1984, were probably the first true WYSIWYG word processors to become known to many people, at least until the introduction of Microsoft Windows. Dedicated word processors eventually became museum pieces.

http://en.wikipedia.org/wiki/Word_processor

 

The Central Processing Unit

The Central Processing Unit (CPU) is the brain of the computer--it is the 'compute' in computer. Modern CPUs are what are called integrated circuits, or chips: several processing components integrated into a single piece of silicon. Without the CPU, you have no computer. The CPU is composed of millions (and now billions) of transistors.

Each transistor has a set of inputs and one output. When one or more of the inputs receive electricity, the combined charge changes the state of the transistor internally and you get a result out the other side. This simple effect of the transistor is what makes it possible for the computer to count and perform logical operations, all of which we call processing.

A modern computer's CPU usually contains an execution core with two or more instruction pipelines, a data and address bus, a dedicated arithmetic logic unit (ALU, also called the math co-processor), and in some cases special high-speed memory for caching program instructions from RAM.

The CPUs in most PCs and servers are general-purpose integrated circuits composed of several smaller dedicated-purpose components which together create the processing capabilities of the modern computer.

For example, Intel makes the Pentium, while AMD makes the Athlon and the Duron (a budget chip with a smaller memory cache).

Generations

CPU manufacturers periodically engineer new ways of doing processing that require significant re-engineering of the current chip design. When they create a new design that changes the number of bits the chip can handle, or some other major way in which the chip performs its job, they are creating a new generation of processors. As of the time this tutorial was last updated (2008), there were seven generations of chips, with an eighth on the drawing board.

CPU Components

A lot of components go into building a modern computer processor and just what goes in changes with every generation as engineers and scientists find new, more efficient ways to do old tasks.

• Execution Core(s)

• Data Bus

• Address Bus

• Math Co-processor

• Instruction sets / Microcode

• Multimedia extensions

• Registers

• Flags

• Pipelining

• Memory Controller

• Cache Memory (L1, L2 and L3)

Measuring Speed: Bits, Cycles and Execution Cores

Bit Width

The first way of describing a processor is to say how many bits it processes in a single instruction or transports across the processor's internal bus in a single cycle (not exactly correct, but close enough). The number of bits used in the CPU's instructions and registers and how many bits the buses can transfer simultaneously is usually expressed in multiples of 8 bits. It is possible for the registers and the bus to have different sizes. Current chip designs are 64 bit chips (as of 2008).

More bits usually means more processing capability and more speed.
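As a rough illustration of why bit width matters, the Python snippet below prints how many distinct values can be represented at common register widths.

    # Number of distinct values representable at common bit widths.
    for bits in (8, 16, 32, 64):
        print(bits, "bits ->", 2 ** bits, "possible values")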

Clock Cycles

The second way of describing a processor is to say how many cycles per second the chip operates at. This is how many times per second a charge of electricity passes through the chip. The more cycles, the faster the processor. Currently, chips operate in the billions of cycles per second range. When you're talking about billions of anything in computer terms, you're talking about 'giga' something. When you're talking about how many cycles per second, you're talking about 'hertz'. Putting the two together, you get gigahertz.

More clock cycles usually means more processing capability and more speed.

Execution Cores

The third way of describing a processor is to say how many execution cores are in the chip. The most advanced chips today have eight execution cores. More execution cores means you can get more work done at the same time, but it doesn't necessarily mean a single program will run faster. To put it another way, a processor with one execution core might be able to run your MP3 music, your web browser and a graphics program before it slows down enough that running anything more is not worth it. A system with an 8-core processor could run all that plus ten more applications without even seeming to slow down (of course, this assumes you have enough RAM to load all of this software at the same time).

More execution cores means more processing capability, but not necessarily more speed.

As of 2008, the most advanced processors available are 64-bit processors with 8 cores, running as fast as 3-4 gigahertz. Intel has released quad-core 64-bit chips as has AMD.

Multi-Processor Computers

And if you still need more processing power, some computers are designed to run more than one processor chip at the same time. Many companies that manufacture servers make models that accept two, four, eight, sixteen, even thirty-two processors in a single chassis. The biggest supercomputers run hundreds of thousands of quad-core processors in parallel to do major calculations for such applications as thermonuclear weapons simulations, radioactive decay simulations, weather simulations, high energy physics calculations and more.

CPU Speed Measurements

The main measurement quoted by manufacturers as a supposed indication of processing speed is the clock speed of the chip, measured in hertz. The theory goes that the higher the number of megahertz or gigahertz, the faster the processor.

However, comparing raw speeds is not always a good comparison between chips. Counting how many instructions are processed per second (MIPS, BIPS, TIPS for millions, billions and trillions of instructions per second) is a better measurement. Still others use the number of mathematical calculations per second to rate the speed of a processor.
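As a crude, purely illustrative sketch of measuring throughput, the Python snippet below counts how many interpreter-level loop iterations run in one second on the machine executing it; this is not a real MIPS benchmark, since each Python operation is far heavier than a single machine instruction.

    # Crude throughput estimate: loop iterations per second (not real MIPS).
    import time

    def count_loops(duration=1.0):
        end = time.perf_counter() + duration
        loops = 0
        while time.perf_counter() < end:
            loops += 1
        return loops

    print(f"Roughly {count_loops():,} Python-level loop iterations per second")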

Of course, what measurement is most important and most helpful to you depends on what you use a computer for. If you primarily do intensive math calculations, measuring the number of calculations per second is most important. If you are measuring how fast the computer runs an application, then instructions per second are most important.

Processor Manufacturers

• Advanced Micro Devices (AMD)

• Intel

• IBM

• Motorola

• Cyrix

• Texas Instruments

AMD and Intel have pretty much dominated the market. AMD and Intel chips are for IBM-compatible machines. Motorola chips are made for Macintoshes. Cyrix (another IBM-compatible chip maker) runs a distant fourth place in terms of the number of chips sold.

Today the makers of IBM-compatible chips produce processors whose inputs and outputs are identical, though the internal architecture may be different. This means that though they may not be built the same way, they DO all run the same software.

The CPU is built using logic gates, and contains a small number of programs called 'microcode' built into the chip to perform certain basic processes (like reading data from the bus and writing to a device). Many current chips use a 'reduced instruction set computing' (RISC) architecture, at least internally. Chips can also be measured in terms of instructions processed per second (MIPS).

Symbols, Instructions and Microcode

• Symbols

• Instructions

• Microcode

                                           

                             Tutorials

Basic Concepts

If you are new to the Information Technologies arena and want to get in, you should read through the Theory section first before moving on to other sections. Each section introduces a number of basic concepts that are central to understanding the technology tutorials elsewhere on this site.

This Theory section is generally non-technical, and should not be memorized. It is presented here to give you, the reader, a broad set of concepts used to discuss the technologies in the tutorial sections. You only need to be familiar with the concepts below.

Network Models

There are two theoretical models of networking, the TCP/IP Model and the OSI Model.

The TCP/IP Model directly reflects how the Internet and TCP/IP based networks work. As the smaller, less complex model it is easier to learn. However, engineers more frequently refer to the OSI Model.

The OSI Model is a model of how networks should work. It is theory only, and not a hard science. It is a model designed to make it easier to categorize different technologies and protocols so that their operation can be more easily described and understood.

Both the TCP/IP Model and the OSI Model are part of the Cisco CCNA certification exam, so you have to know them both.

NUMBER SYSTEMS

Number systems are used to indicate quantities and values and there is more than one number system. The Information Technology field uses four numbering systems frequently: binary, octal, decimal and hexadecimal. To understand these, you have to understand why we use numbers and how these number systems work. If you are going to be in the IT field, understanding these numbers is basic to any study of almost any topic. Computers think in binary and represent their data in hexadecimal and octal. We humans use decimal. Until you know how to convert from one to the other, and understand them all, a lot of things are going to be a LOT harder.
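As a quick illustration, the Python snippet below shows the same value written in each of the four numbering systems mentioned above, using the language's built-in conversion functions.

    # The same quantity in binary, octal, decimal and hexadecimal.
    value = 202

    print(bin(value))   # 0b11001010  (binary)
    print(oct(value))   # 0o312       (octal)
    print(value)        # 202         (decimal)
    print(hex(value))   # 0xca        (hexadecimal)

    # And back again: int() accepts a base.
    print(int("11001010", 2), int("312", 8), int("ca", 16))   # 202 202 202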

Communications

SIGNALLING

This covers the basics of generating a signal of nearly any type. Signals are used to carry data, and understanding basic signalling techniques helps provide an understanding of essential physical-layer protocols.

MODULATION

Modulation is how we get data 'encoded' into a signal. Modulation is the process of altering a signal in specific ways so that the signal can indicate a change of state that matches the flow of zeroes and ones in the data stream.

Quantization

Quantization is the process of converting analog information to digital information. Quantization is used for storing music in digital format, for transmitting data across networks and transmitting voice, video and audio across digital networks.
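A minimal sketch of the idea in Python: sample a sine wave and round each sample to one of 256 levels (8 bits). The sample rate, frequency and bit depth here are just example values.

    # Quantization sketch: map analog samples in [-1, 1] to 8-bit levels (0-255).
    import math

    SAMPLE_RATE = 8000      # samples per second (example value)
    FREQ = 1000             # signal frequency in Hz (example value)
    LEVELS = 256            # 8-bit quantization

    samples = []
    for n in range(16):     # first 16 samples only
        t = n / SAMPLE_RATE
        analog = math.sin(2 * math.pi * FREQ * t)          # value in [-1, 1]
        digital = round((analog + 1) / 2 * (LEVELS - 1))   # map to 0..255
        samples.append(digital)

    print(samples)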

Transmission

The process of transmitting information can use simple on/off signals or more complex mixed radio frequencies. The goal may be the same, but the process always differs according to the method of communication used.

 

                                            Computer

A computer is any device capable of making calculations--it performs a computation and produces an answer. But this is a little too broad to describe most computers today. Today's computers can perform very complex calculations at extremely high speeds.

The computers most people think of, their desktop computers, their laptops or notebook computers are 'general purpose' machines. All general purpose computers have a place to store instructions (a program), registers to store intermediate values created during computation and a computation engine to perform logical or mathematical calculations. Increasingly, computers are being integrated with everything you can think of, from your car, to your audio system, your appliances, your phone, virtually everything has a computer inside it these days.

Modern computers are a combination of firmware, hardware and software. This tutorial will focus on the main components in every computer, the different kinds of computer hardware, computer software and computer firmware.

What is the Internet?

The Internet is the worldwide super-network of publicly accessible computers. The Internet is composed of thousands of networks all around the world that have agreed to connect and communicate with each other and transfer data between each other. There is no International Law mandating the Internet's existence, nor any body solely responsible for governing or policing the Internet. A wide range of Internet tutorials are provided here, covering questions such as "What is the Internet?" and "How does the Internet work?", the history of the Internet and some background, and the majority of networking technologies, network protocols and communications protocols used by the Internet.

I highly recommend that you read the Theory section of this site, especially the OSI Model and Binary sections, before beginning the Internet tutorials, as they will help you get the most out of these tutorials about the Internet.

How Does the Internet Work?


 

Definition of a Network

The word network is used generally to mean a set of computers that are connected together in such a way as to permit them to communicate and share information. The word network can have various more specific meanings depending on the context in which it is used. The Internet is a network; so is the LAN at the office or your school. Speaking in the most general terms, the networks at your school and work and the Internet are all networks, but they aren't necessarily part of the same network. What is and is not part of a network is often defined by who owns and operates the equipment and the computers that are part of the network. Thus, your school's network is separate from the Internet.

You know you have a network when you have two or more computers connected together and they are able to communicate. Plugged into the back of each computer (end station) is some sort of communications port. Most desktop computers today have serial ports, parallel ports, ethernet ports, modem ports, firewire ports, USB ports and more. All of these ports have been used in one way or another to connect computers to a network.

Xerox was the first company to start research and development on networks. It knew its printers were expensive and that users were only able to print from one big computer (a mainframe) attached to the printer directly. Xerox decided that it could sell more printers if anyone could use the printer from any computer over a communications link, so Xerox put Bob Metcalfe and others to work on researching and designing what eventually came to be called Ethernet.

Hosts, End Stations and Workstations

When people talk about networks, they often refer to computers that are at the edge of the network as hosts, end stations or workstations. It's all the same thing: a computer attached to the network; though the word HOST has the most general meaning and can include anything attached to the network, including hubs, bridges, switches, routers, access points, firewalls, workstations, servers, mainframes, printers, scanners, copiers, fax machines and more!

Just about everything electronic that has a processor and which you would use in an office is 'network capable' today and lots of things that aren't currently networked probably will be networked in the future. Yes, in many offices the phone system already IS the network (Voice over IP).

LAN, MAN, WAN and er.. IPAN??

There are some terms that are used to describe the size and scope of a network: LAN, WAN and MAN. We've added our own term, 'IPAN'.

A Local Area Network (LAN) is usually a single set of connected computers that are in a single small location such as a room, a floor of a building, or the whole building.

A Metropolitan Area Network (MAN) is a network that encompasses a city or town. It is usually multiple point-to-point fiber-optic connections put together by a communications company and leased to its customers, but a small number of big corporations have built a few of these of their own and opened them to the local companies with which they do business. The automotive, travel and insurance industries are just a few examples of industries that have built such networks.

A Wide Area Network (WAN) is usually composed of all the links that connect the buildings of a campus together, such as at a university or at a corporate headquarters. WAN connections can often span miles, so you frequently hear people referring to the 'WAN' connection to an office halfway around the world. Usually, what distinguishes a WAN from a LAN is that there are one or more links that span a large distance over serial, T-carrier, ISDN, Frame Relay or ATM links.

So what the heck is an IPAN? An Inter-Planetary Area Network. The rovers Spirit and Opportunity on Mars have IP addresses on a NASA network, and NASA uses Internet protocols to communicate with them (probably UDP). While the communication with the Spirit rover doesn't actually get transmitted over the Internet, the NASA network does have hosts spanning Earth and Mars.

 Physical Network Topologies

The hardware used to build the network will usually require that the structure of the network conform to a certain design. The word topology is used to describe what the network looks like when drawn on paper and to a large extent, how it operates.

Bus Topology

A bus topology connects all computers together using a single wire, usually a piece of coaxial cable, that passes electricity over a copper core that all devices transmit and receive from. All devices hear all communication over the bus.

Ring Topology

A ring topology usually involves connecting one or more computers together using paired physical interfaces. One interface is the clockwise side of the ring; the other is the counter-clockwise side. Devices connected to the ring can transmit and receive, but there is usually some method for controlling access to the shared network hardware. Token Ring uses a ring topology, as do CDDI and FDDI. All three of these network technologies use a token-passing scheme in which the computer holding the token is allowed to transmit.
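
As a rough illustration of the token-passing idea, here is a toy Python sketch (the station names and frames are invented, and real Token Ring, CDDI and FDDI are far more involved): only the current token holder is allowed to transmit.

# Toy simulation of token passing on a ring: only the station that
# currently holds the token may transmit a queued frame.
stations = {
    "A": ["frame from A"],   # A has one frame waiting to send
    "B": [],                 # B has nothing to send
    "C": ["frame from C"],
}
ring_order = ["A", "B", "C"]   # clockwise order around the ring
token_holder = 0               # index of the station holding the token

for _ in range(2 * len(ring_order)):   # let the token circulate twice
    name = ring_order[token_holder]
    queue = stations[name]
    if queue:
        print(name, "holds the token and transmits:", queue.pop(0))
    else:
        print(name, "holds the token but has nothing to send")
    token_holder = (token_holder + 1) % len(ring_order)   # pass the token on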

Star Topology

A star topology is the most common network topology in use today. All devices in the network are connected to a single hub or repeater. The connected devices radiate outward from the hub like an asterisk '*' or star.

Hub and Spoke Topology

Hub and spoke is another term often used to describe a star topology.

Point to Point Topology (Daisy Chaining)

A point-to-point topology is most often a communications connection between two devices over a single hardware connection that is not shared by any other devices. There are exactly two devices on the connection. Networks using point-to-point topologies can be daisy-chained together to form an end-to-end communications path.

Point to Multipoint

A single connection point on the network has network segments that run to several other points.

Logical Network Topologies

Peer-to-Peer

A peer-to-peer network is composed of two or more self-sufficient computers. Each computer handles all functions itself: logging in, storage, providing a user interface and so on. The computers on a peer-to-peer network can communicate, but do not need the resources or services available from the other computers on the network. Peer-to-peer is the opposite of the client-server logical network model.

A Microsoft Windows Workgroup is one example of a peer-to-peer network. UNIX servers running as stand-alone systems also form a peer-to-peer network. Logins, services and files are local to each computer. You can only access resources on other peer computers if you have logins on those computers.

Client - Server

The simplest client-server network is composed of a server and one or more clients. The server provides a service that the client computer needs. Clients connect to the server across the network in order to access the service. A server can be a piece of software running on a computer, or it can be the computer itself.

One of the simplest examples of client-server is a File Transfer Protocol (FTP) session. FTP is a protocol and service that allows your computer to get files from, or put files onto, a second computer over a network connection. A computer running FTP software opens a session to an FTP server to download or upload a file. The FTP server is providing file storage services over the network; because it provides file storage services, it is said to be a 'file server'. A client software application is required to access the FTP service running on the file server.
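
From the client's side, such a session can be sketched with Python's standard ftplib module (the host name 'ftp.example.com', the anonymous login and the file name are placeholders, not a real server):

from ftplib import FTP

# Open a session to a (hypothetical) FTP file server, list its contents
# and download one file. Host, login and file name are placeholders.
with FTP("ftp.example.com") as ftp:
    ftp.login()                              # anonymous login, if the server allows it
    ftp.retrlines("LIST")                    # print the server's directory listing
    with open("report.txt", "wb") as local_file:
        ftp.retrbinary("RETR report.txt", local_file.write)   # fetch the file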

Most computer networks today control logins on all machines from a centralized logon server. When you sit down at a computer and type in your username and password, they are sent by the computer to the logon server. UNIX servers use NIS, NIS+ or LDAP to provide these login services. Microsoft Windows computers use Active Directory and Windows Logon and/or an LDAP client.

Users on a client-server network will usually only need one login to access resources on the network.

Distributed Services

Computer networks using distributed services provide those services to client computers, but not from a centralized server. The services are running on more than one computer and some or all of the functions provided by the service are provided by more than one server.

The simplest example of a distributed service is Domain Name Service (DNS) which performs the function of turning human-understandable names into computer numbers called IP addresses. Whenever you browse a web page, your computer uses DNS. Your computer sends a DNS request to your local DNS server. That local server will then go to a remote server on the Internet called a "DNS Root Server" to begin the lookup process. This Root Server will then direct your local DNS server to the owner of the domain name the website is a part of. Thus, there are at least three DNS servers involved in the process of finding and providing the IP address of the website you intended to browse. Your local DNS server provides the query functions and asks other servers for information. The Root DNS server tells your local DNS server where to find an answer. The DNS server that 'owns' the domain of the website you are trying to browse tells your local DNS server the correct IP address. Your computer stores that IP address in its own local DNS cache. Thus, DNS is a distributed service that runs everywhere, but no one computer can do the job by itself.
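
From a program's point of view, all of this distributed machinery hides behind a single call to the resolver. A minimal Python sketch (the host name is only an example):

import socket

# Ask the operating system's resolver (and, through it, the local DNS
# server and any other servers it consults) for the host's IP address.
hostname = "www.example.com"
ip_address = socket.gethostbyname(hostname)
print(hostname, "resolves to", ip_address)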

 Communication Methods

1. Point-to-point

2. Broadcast, multiple access

3. Broadcast, non-multiple access

4. Point-to-multipoint

Network Technologies

1. Repeater (Hub based)

2. Bridging (Bridge based)

3. Switching (Switch based)

4. Routing (multiple networks)

 

 http://www.inetdaemon.com/tutorials/computers/hardware/cpu/

                             

                                   Computer programming

Computer programming is a field that deals with the analytical creation of source code that can be used to configure computer systems. Computer programmers may work across a broad range of programming tasks, or specialize in some aspect of the development, support, or maintenance of computers for the home or workplace. Programmers provide the basis for the creation and ongoing operation of the systems that many people rely upon for all sorts of information exchange, both business-related and for entertainment purposes.

The computer programmer often focuses on the development of software that allows people to perform a broad range of functions. All online functions that are utilized in the home and office owe their origins to a programmer or group of programmers. Computer operating systems, office suites, word processing programs, and even Internet dialing software all exist because of the work of programmers.

Computer programming goes beyond software development. The profession also extends to the adaptation of software for internal use, and the insertion of code that allows a program to be modified for a function that is unique to a given environment. When this is the case, the computer programmer may be employed with a company that wishes to use existing software as the foundation for a customized platform that will be utilized as part of the company intranet.

A third aspect of computer programming is the ongoing maintenance of software that is currently running as part of a network. Here, the programmer may work hand in hand with other information technology specialists to identify issues with current programs, and take steps to adapt or rewrite sections of code in order to correct a problem or enhance a function in some manner.


In short, computer programming is all about developing, adapting, and maintaining the programs that many of us rely upon for both work and play. Programmers are constantly in demand for all three of these functions, since businesses and individuals are always looking for new and better ways to make use of computer technology for all sorts of tasks. With this in mind, computer programming is a very stable profession to enter, and it offers many different employment opportunities.

 

http://www.wisegeek.com/what-is-computer-programming.htm

                                      Computer programming

Computer programming (often shortened to programming or coding) is the process of designing, writing, testing, debugging / troubleshooting, and maintaining the source code of computer programs. This source code is written in a programming language. The code may be a modification of an existing source or something completely new. The purpose of programming is to create a program that exhibits a certain desired behaviour (customization). The process of writing source code often requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms and formal logic.

Overview

Within software engineering, programming (the implementation) is regarded as one phase in a software development process.

There is an ongoing debate on the extent to which the writing of programs is an art, a craft or an engineering discipline. In general, good programming is considered to be the measured application of all three, with the goal of producing an efficient and evolvable software solution (the criteria for "efficient" and "evolvable" vary considerably). The discipline differs from many other technical professions in that programmers, in general, do not need to be licensed or pass any standardized (or governmentally regulated) certification tests in order to call themselves "programmers" or even "software engineers." However, representing oneself as a "Professional Software Engineer" without a license from an accredited institution is illegal in many parts of the world. Because the discipline covers many areas, which may or may not include critical applications, it is debatable whether licensing is required for the profession as a whole. In most cases, the discipline is self-governed by the entities which require the programming, and sometimes very strict environments are defined (e.g. United States Air Force use of AdaCore and security clearance).

Another ongoing debate is the extent to which the programming language used in writing computer programs affects the form that the final program takes. This debate is analogous to that surrounding the Sapir-Whorf hypothesis in linguistics, that postulates that a particular language's nature influences the habitual thought of its speakers. Different language patterns yield different patterns of thought. This idea challenges the possibility of representing the world perfectly with language, because it acknowledges that the mechanisms of any language condition the thoughts of its speaker community.

Said another way, programming is the craft of transforming requirements into something that a computer can execute.

History of programming

Wired plug board for an IBM 402 Accounting Machine.

The concept of devices that operate following a pre-defined set of instructions traces back to Greek mythology, notably Hephaestus, the Greek blacksmith god, and his mechanical servants. The Antikythera mechanism from ancient Greece was a calculator utilizing gears of various sizes and configurations to determine its operation. Al-Jazari built programmable automata in 1206. One system employed in these devices was the use of pegs and cams placed into a wooden drum at specific locations, which would sequentially trigger levers that in turn operated percussion instruments. The output of this device was a small drummer playing various rhythms and drum patterns. The Jacquard Loom, which Joseph Marie Jacquard developed in 1801, used a series of pasteboard cards with holes punched in them. The hole pattern represented the pattern that the loom had to follow in weaving cloth. The loom could produce entirely different weaves using different sets of cards. Charles Babbage adopted the use of punched cards around 1830 to control his Analytical Engine. The synthesis of numerical calculation, predetermined operation and output, along with a way to organize and input instructions in a manner relatively easy for humans to conceive and produce, led to the modern development of computer programming. Development of computer programming accelerated through the Industrial Revolution.

In the late 1880s, Herman Hollerith invented the recording of data on a medium that could then be read by a machine. Prior uses of machine-readable media had been for control, not data. "After some initial trials with paper tape, he settled on punched cards..." To process these punched cards, first known as "Hollerith cards", he invented the tabulator and the keypunch machine. These three inventions were the foundation of the modern information processing industry. In 1896 he founded the Tabulating Machine Company (which later became the core of IBM). The addition of a control panel (plugboard) to his 1906 Type I Tabulator allowed it to do different jobs without having to be physically rebuilt. By the late 1940s, there were a variety of plugboard-programmable machines, called unit record equipment, performing data-processing tasks such as card reading. Early computer programmers used plugboards for the variety of complex calculations requested of the newly invented machines.

 

Data and instructions could be stored on external punched cards, which were kept in order and arranged in program decks.

The invention of the von Neumann architecture allowed computer programs to be stored in computer memory. Early programs had to be painstakingly crafted using the instructions (elementary operations) of the particular machine, often in binary notation. Every model of computer would likely use different instructions (machine language) to do the same task. Later, assembly languages were developed that let the programmer specify each instruction in a text format, entering abbreviations for each operation code instead of a number and specifying addresses in symbolic form (e.g., ADD X, TOTAL). Entering a program in assembly language is usually more convenient, faster, and less prone to human error than using machine language, but because an assembly language is little more than a different notation for a machine language, any two machines with different instruction sets also have different assembly languages.

In 1954, FORTRAN was invented; it was the first high-level programming language to have a functional implementation, as opposed to just a design on paper. (A high-level language is, in very general terms, any programming language that allows the programmer to write programs in terms that are more abstract than assembly language instructions, i.e. at a level of abstraction "higher" than that of an assembly language.) It allowed programmers to specify calculations by entering a formula directly (e.g. Y = X*2 + 5*X + 9). The program text, or source, is converted into machine instructions using a special program called a compiler, which translates the FORTRAN program into machine language. In fact, the name FORTRAN stands for "Formula Translation". Many other languages were developed, including some for commercial programming, such as COBOL. Programs were mostly still entered using punched cards or paper tape. (See computer programming in the punch card era.) By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were developed that allowed changes and corrections to be made much more easily than with punched cards. (Usually, an error in punching a card meant that the card had to be discarded and a new one punched to replace it.)
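
The same kind of formula looks much the same in most high-level languages today. A minimal Python sketch of the example formula above (the sample values of X are invented):

# The FORTRAN-style formula Y = X*2 + 5*X + 9, written as a Python function.
def y(x):
    return x * 2 + 5 * x + 9

for x in (0, 1, 2):
    print(x, y(x))   # prints 0 9, then 1 16, then 2 23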

As time has progressed, computers have made giant leaps in the area of processing power. This has brought about newer programming languages that are more abstracted from the underlying hardware. Although these high-level languages usually incur greater overhead, the increase in speed of modern computers has made the use of these languages much more practical than in the past. These increasingly abstracted languages typically are easier to learn and allow the programmer to develop applications much more efficiently and with less source code. However, high-level languages are still impractical for a few programs, such as those where low-level hardware control is necessary or where maximum processing speed is vital.

Throughout the second half of the twentieth century, programming was an attractive career in most developed countries. Some forms of programming have been increasingly subject to offshore outsourcing (importing software and services from other countries, usually at a lower wage), making programming career decisions in developed countries more complicated, while increasing economic opportunities in less developed areas. It is unclear how far this trend will continue and how deeply it will impact programmer wages and opportunities.

Modern programming

Quality requirements

Whatever the approach to software development may be, the final program must satisfy some fundamental properties. The following properties are among the most relevant:

• Efficiency/performance: the amount of system resources a program consumes (processor time, memory space, slow devices such as disks, network bandwidth and, to some extent, even user interaction): the less, the better. This also includes the correct disposal of resources, such as cleaning up temporary files and avoiding memory leaks.

• Reliability: how often the results of a program are correct. This depends on conceptual correctness of algorithms, and minimization of programming mistakes, such as mistakes in resource management (e.g., buffer overflows and race conditions) and logic errors (such as division by zero or off-by-one errors); a minimal illustration of these two logic errors appears after this list.

• Robustness: how well a program anticipates problems not due to programmer error. This includes situations such as incorrect, inappropriate or corrupt data, unavailability of needed resources such as memory, operating system services and network connections, and user error.

• Usability: the ergonomics of a program: the ease with which a person can use the program for its intended purpose, or in some cases even unanticipated purposes. Such issues can make or break a program's success regardless of other qualities. Usability involves a wide range of textual, graphical and sometimes hardware elements that improve the clarity, intuitiveness, cohesiveness and completeness of a program's user interface.

• Portability: the range of computer hardware and operating system platforms on which the source code of a program can be compiled/interpreted and run. This depends on differences in the programming facilities provided by the different platforms, including hardware and operating system resources, expected behaviour of the hardware and operating system, and availability of platform specific compilers (and sometimes libraries) for the language of the source code.

• Maintainability: the ease with which a program can be modified by its present or future developers in order to make improvements or customizations, fix bugs and security holes, or adapt it to new environments. Good practices during initial development make the difference in this regard. This quality may not be directly apparent to the end user but it can significantly affect the fate of a program over the long term.
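
As promised under the reliability item above, here is a minimal, invented illustration of the two logic errors it names (an off-by-one error and an unguarded division by zero), together with a corrected version; Python is used only as an example language.

def average_buggy(values):
    # Off-by-one error: range(len(values) - 1) skips the last element,
    # and an empty list causes a division by zero.
    total = 0
    for i in range(len(values) - 1):
        total += values[i]
    return total / len(values)

def average_fixed(values):
    # Corrected: sum every element and guard against an empty list.
    if not values:
        return 0.0
    return sum(values) / len(values)

print(average_buggy([2, 4, 6]))   # 2.0 -- wrong, the last element was skipped
print(average_fixed([2, 4, 6]))   # 4.0 -- correct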

Algorithmic complexity

The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problem. For this purpose, algorithms are classified into orders using so-called Big O notation, O(n), which expresses resource use, such as execution time or memory consumption, in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to the circumstances.
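
As a small, invented illustration of why this matters: searching a sorted list one element at a time is O(n), while binary search on the same list is O(log n). A Python sketch:

import bisect

sorted_data = list(range(0, 1_000_000, 2))   # a large sorted list of even numbers

def linear_search(values, target):
    # O(n): may inspect every element before finding the target or giving up.
    for i, value in enumerate(values):
        if value == target:
            return i
    return -1

def binary_search(values, target):
    # O(log n): repeatedly halves the search interval (via the bisect module).
    i = bisect.bisect_left(values, target)
    if i < len(values) and values[i] == target:
        return i
    return -1

print(linear_search(sorted_data, 999_998))   # finds the answer after ~500,000 comparisons
print(binary_search(sorted_data, 999_998))   # finds the same answer after ~20 comparisons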

Methodologies

The first step in most formal software development projects is requirements analysis, followed by modeling, implementation, testing and failure elimination (debugging). There are many differing approaches to each of these tasks. One approach popular for requirements analysis is Use Case analysis.

Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both the OOAD and MDA.

A similar technique used for database design is Entity-Relationship Modeling (ER Modeling).

Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic languages.
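
The first two of these styles can be contrasted even within a single language. A minimal Python sketch (the task, summing the squares of a few numbers, is invented purely for illustration):

numbers = [1, 2, 3, 4]

# Imperative (procedural) style: describe step by step how to build the result.
total = 0
for n in numbers:
    total += n * n
print(total)                                  # 30

# Functional style: describe what the result is, as a composition of functions.
print(sum(map(lambda n: n * n, numbers)))     # 30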

Measuring language usage

It is very difficult to determine which modern programming languages are most popular. Some languages are very popular for particular kinds of applications (e.g., COBOL is still strong in corporate data centers, often on large mainframes; FORTRAN in engineering applications; scripting languages in web development; and C in embedded applications), while some languages are regularly used to write many different kinds of applications.

Methods of measuring programming language popularity include: counting the number of job advertisements that mention the language, the number of books teaching the language that are sold (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL).

Debugging

The first documented computer "bug": a moth removed from a machine in 1947.

Debugging is a very important task in the software development process, because an incorrect program can have significant consequences for its users. Some languages are more prone to some kinds of faults because their specification does not require compilers to perform as much checking as other languages. Use of a static analysis tool can help detect some possible problems.

Debugging is often done with IDEs like Visual Studio, NetBeans, and Eclipse. Standalone debuggers like gdb are also used, and these often provide less of a visual environment, usually using a command line.
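
Python ships with a command-line debugger of the same kind, pdb. A minimal sketch of dropping into it (the divide function below is invented for illustration):

import pdb

def divide(a, b):
    # pdb.set_trace() pauses execution here and hands control to the debugger
    # prompt, where variables can be inspected and the code stepped through.
    pdb.set_trace()
    return a / b

print(divide(10, 2))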

Programming languages

Different programming languages support different styles of programming (called programming paradigms). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute.

Allen Downey, in his book How To Think Like A Computer Scientist, writes:

The details look different in different languages, but a few basic instructions appear in just about every language:

• input: Get data from the keyboard, a file, or some other device.

• output: Display data on the screen or send data to a file or other device.

• arithmetic: Perform basic arithmetical operations like addition and multiplication.

• conditional execution: Check for certain conditions and execute the appropriate sequence of statements.

• repetition: Perform some action repeatedly, usually with some variation.
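
A tiny Python program touching all five of the basic instructions just listed (the averaging task itself is invented, not taken from the book):

# input: get data from the keyboard
count = int(input("How many numbers? "))

total = 0
# repetition: perform an action repeatedly
for i in range(count):
    # arithmetic: basic arithmetical operations
    total += float(input("Number " + str(i + 1) + ": "))

# conditional execution: check a condition and act on it
if count > 0:
    # output: display data on the screen
    print("Average:", total / count)
else:
    print("No numbers were entered.")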

Many computer languages provide a mechanism to call functions provided by libraries. Provided the functions in a library follow the appropriate run-time conventions (e.g., the method of passing arguments), these functions may be written in any other language.

Programmers

Computer programmers are those who write computer software. Their jobs usually involve:

• Coding

• Compilation

• Documentation

• Integration

• Maintenance

• Requirements analysis

• Software architecture

• Software testing

• Specification

• Debugging

 http://en.wikipedia.org/wiki/Computer_programming

 

 

                                                  Technology

By the mid 20th century, humans had achieved a mastery of technology sufficient to leave the atmosphere of the Earth for the first time and explore space.

Technology is the usage and knowledge of tools, techniques, crafts, systems or methods of organization. The word technology comes from the Greek technología (τεχνολογία) — téchnē (τέχνη), an 'art', 'skill' or 'craft' and -logía (-λογία), the study of something, or the branch of knowledge of a discipline. The term can either be applied generally or to specific areas: examples include construction technology, medical technology, or state-of-the-art technology or high technology. Technologies can also be exemplified in a material product, for example an object can be termed state of the art.

Technologies significantly affect human as well as other animal species' ability to control and adapt to their natural environments. The human species' use of technology began with the conversion of natural resources into simple tools. The prehistorical discovery of the ability to control fire increased the available sources of food and the invention of the wheel helped humans in travelling in and controlling their environment. Recent technological developments, including the printing press, the telephone, and the Internet, have lessened physical barriers to communication and allowed humans to interact freely on a global scale. However, not all technology has been used for peaceful purposes; the development of weapons of ever-increasing destructive power has progressed throughout history, from clubs to nuclear weapons.

Technology has affected society and its surroundings in a number of ways. In many societies, technology has helped develop more advanced economies (including today's global economy) and has allowed the rise of a leisure class. Many technological processes produce unwanted by-products, known as pollution, and deplete natural resources, to the detriment of the Earth and its environment. Various implementations of technology influence the values of a society and new technology often raises new ethical questions. Examples include the rise of the notion of efficiency in terms of human productivity, a term originally applied only to machines, and the challenge of traditional norms.

Philosophical debates have arisen over the present and future use of technology in society, with disagreements over whether technology improves the human condition or worsens it. Neo-Luddism, anarcho-primitivism, and similar movements criticise the pervasiveness of technology in the modern world, opining that it harms the environment and alienates people; proponents of ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to society and the human condition. Indeed, until recently, it was believed that the development of technology was restricted only to human beings, but recent scientific studies indicate that other primates and certain dolphin communities have developed simple tools and learned to pass their knowledge to other generations.

Definition and usage

The invention of the printing press made it possible for scientists and politicians to communicate their ideas with ease, leading to the Age of Enlightenment; an example of technology as a cultural force.

The use of the term technology has changed significantly over the last 200 years. Before the 20th century, the term was uncommon in English, and usually referred to the description or study of the useful arts. The term was often connected to technical education, as in the Massachusetts Institute of Technology (chartered in 1861). "Technology" rose to prominence in the 20th century in connection with the second industrial revolution. The meanings of technology changed in the early 20th century when American social scientists, beginning with Thorstein Veblen, translated ideas from the German concept of Technik into "technology." In German and other European languages, a distinction exists between Technik and Technologie that is absent in English, as both terms are usually translated as "technology." By the 1930s, "technology" referred not to the study of the industrial arts, but to the industrial arts themselves. In 1937, the American sociologist Read Bain wrote that "technology includes all tools, machines, utensils, weapons, instruments, housing, clothing, communicating and transporting devices and the skills by which we produce and use them." Bain's definition remains common among scholars today, especially social scientists. But equally prominent is the definition of technology as applied science, especially among scientists and engineers, although most social scientists who study technology reject this definition. More recently, scholars have borrowed from European philosophers of "technique" to extend the meaning of technology to various forms of instrumental reason, as in Foucault's work on technologies of the self ("techniques de soi").

Dictionaries and scholars have offered a variety of definitions. The Merriam-Webster dictionary offers a definition of the term: "the practical application of knowledge especially in a particular area" and "a capability given by the practical application of knowledge". Ursula Franklin, in her 1989 "Real World of Technology" lecture, gave another definition of the concept; it is "practice, the way we do things around here". The term is often used to imply a specific field of technology, or to refer to high technology or just consumer electronics, rather than technology as a whole. Bernard Stiegler, in Technics and Time, 1, defines technology in two ways: as "the pursuit of life by means other than life", and as "organized inorganic matter."

Technology can be most broadly defined as the entities, both material and immaterial, created by the application of mental and physical effort in order to achieve some value. In this usage, technology refers to tools and machines that may be used to solve real-world problems. It is a far-reaching term that may include simple tools, such as a crowbar or wooden spoon, or more complex machines, such as a space station or particle accelerator. Tools and machines need not be material; virtual technology, such as computer software and business methods, falls under this definition of technology.

The word "technology" can also be used to refer to a collection of techniques. In this context, it is the current state of humanity's knowledge of how to combine resources to produce desired products, to solve problems, fulfill needs, or satisfy wants; it includes technical methods, skills, processes, techniques, tools and raw materials. When combined with another term, such as "medical technology" or "space technology", it refers to the state of the respective field's knowledge and tools. "State-of-the-art technology" refers to the high technology available to humanity in any field.

Technology can be viewed as an activity that forms or changes culture. Additionally, technology is the application of math, science, and the arts for the benefit of life as it is known. A modern example is the rise of communication technology, which has lessened barriers to human interaction and, as a result, has helped spawn new subcultures; the rise of cyberculture has, at its basis, the development of the Internet and the computer. Not all technology enhances culture in a creative way; technology can also help facilitate political oppression and war via tools such as guns. As a cultural activity, technology predates both science and engineering, each of which formalize some aspects of technological endeavor.

 

Science, engineering and technology

The distinction between science, engineering and technology is not always clear. Science is the reasoned investigation or study of phenomena, aimed at discovering enduring principles among elements of the phenomenal world by employing formal techniques such as the scientific method. Technologies are not usually exclusively products of science, because they have to satisfy requirements such as utility, usability and safety.

Engineering is the goal-oriented process of designing and making tools and systems to exploit natural phenomena for practical human means, often (but not always) using results and techniques from science. The development of technology may draw upon many fields of knowledge, including scientific, engineering, mathematical, linguistic, and historical knowledge, to achieve some practical result.

Technology is often a consequence of science and engineering — although technology as a human activity precedes the two fields. For example, science might study the flow of electrons in electrical conductors, by using already-existing tools and knowledge. This new-found knowledge may then be used by engineers to create new tools and machines, such as semiconductors, computers, and other forms of advanced technology. In this sense, scientists and engineers may both be considered technologists; the three fields are often considered as one for the purposes of research and reference.

The exact relations between science and technology in particular have been debated by scientists, historians, and policymakers in the late 20th century, in part because the debate can inform the funding of basic and applied science. In the immediate wake of World War II, for example, in the United States it was widely considered that technology was simply "applied science" and that to fund basic science was to reap technological results in due time. An articulation of this philosophy could be found explicitly in Vannevar Bush's treatise on postwar science policy, Science—The Endless Frontier: "New products, new industries, and more jobs require continuous additions to knowledge of the laws of nature... This essential new knowledge can be obtained only through basic scientific research." In the late-1960s, however, this view came under direct attack, leading towards initiatives to fund science for specific tasks (initiatives resisted by the scientific community). The issue remains contentious—though most analysts resist the model that technology simply is a result of scientific research.

History

Paleolithic (2.5 million – 10,000 BC)

The use of tools by early humans was partly a process of discovery, partly of evolution. Early humans evolved from a race of foraging hominids which were already bipedal, with a brain mass approximately one third that of modern humans. Tool use remained relatively unchanged for most of early human history, but approximately 50,000 years ago, a complex set of behaviors and tool use emerged, believed by many archaeologists to be connected to the emergence of fully modern language.

Human ancestors have been using stone and other tools since long before the emergence of Homo sapiens approximately 200,000 years ago. The earliest methods of stone tool making, known as the Oldowan "industry", date back to at least 2.3 million years ago, with the earliest direct evidence of tool usage found in Ethiopia within the Great Rift Valley, dating back to 2.5 million years ago. This era of stone tool use is called the Paleolithic, or "Old stone age", and spans all of human history up to the development of agriculture approximately 12,000 years ago.

To make a stone tool, a "core" of hard stone with specific flaking properties (such as flint) was struck with a hammerstone. This flaking produced a sharp edge on the core stone as well as on the flakes, either of which could be used as tools, primarily in the form of choppers or scrapers. These tools greatly aided the early humans in their hunter-gatherer lifestyle to perform a variety of tasks including butchering carcasses (and breaking bones to get at the marrow); chopping wood; cracking open nuts; skinning an animal for its hide; and even forming other tools out of softer materials such as bone and wood.

The earliest stone tools were crude, being little more than a fractured rock. In the Acheulian era, beginning approximately 1.65 million years ago, methods of working these stones into specific shapes, such as hand axes, emerged. The Middle Paleolithic, approximately 300,000 years ago, saw the introduction of the prepared-core technique, where multiple blades could be rapidly formed from a single core stone. The Upper Paleolithic, beginning approximately 40,000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely.

Fire

The discovery and utilization of fire, a simple energy source with many profound uses, was a turning point in the technological evolution of humankind. The exact date of its discovery is not known; evidence of burnt animal bones at the Cradle of Humankind suggests that the domestication of fire occurred before 1,000,000 BC; scholarly consensus indicates that Homo erectus had controlled fire by between 500,000 BC and 400,000 BC. Fire, fueled with wood and charcoal, allowed early humans to cook their food to increase its digestibility, improving its nutrient value and broadening the number of foods that could be eaten.

Clothing and shelter

Other technological advances made during the Paleolithic era were clothing and shelter; the adoption of both technologies cannot be dated exactly, but they were a key to humanity's progress. As the Paleolithic era progressed, dwellings became more sophisticated and more elaborate; as early as 380,000 BC, humans were constructing temporary wood huts. Clothing, adapted from the fur and hides of hunted animals, helped humanity expand into colder regions; humans began to migrate out of Africa by 200,000 BC and into other continents, such as Eurasia.

Neolithic through Classical Antiquity (10,000 BC – 300 AD)

An array of Neolithic artifacts, including bracelets, axe heads, chisels, and polishing tools.

Man's technological ascent began in earnest in what is known as the Neolithic period ("New Stone Age"). The invention of polished stone axes was a major advance because it allowed forest clearance on a large scale to create farms. The discovery of agriculture allowed for the feeding of larger populations, and the transition to a sedentary lifestyle increased the number of children that could be simultaneously raised, as young children no longer needed to be carried, as was the case with the nomadic lifestyle. Additionally, children could contribute labor to the raising of crops more readily than they could to the hunter-gatherer lifestyle.

With this increase in population and availability of labor came an increase in labor specialization. What triggered the progression from early Neolithic villages to the first cities, such as Uruk, and the first civilizations, such as Sumer, is not specifically known; however, the emergence of increasingly hierarchical social structures, the specialization of labor, trade and war amongst adjacent cultures, and the need for collective action to overcome environmental challenges, such as the building of dikes and reservoirs, are all thought to have played a role.

Metal tools

Continuing improvements led to the furnace and bellows and provided the ability to smelt and forge native metals (naturally occurring in relatively pure form). Gold, copper, silver, and lead were such early metals. The advantages of copper tools over stone, bone, and wooden tools were quickly apparent to early humans, and native copper was probably used from near the beginning of Neolithic times (about 8000 BC). Native copper does not naturally occur in large amounts, but copper ores are quite common and some of them produce metal easily when burned in wood or charcoal fires. Eventually, the working of metals led to the discovery of alloys such as bronze and brass (about 4000 BC). The first use of iron alloys such as steel dates to around 1400 BC.

Energy and Transport

Meanwhile, humans were learning to harness other forms of energy. The earliest known use of wind power is the sailboat. The earliest record of a ship under sail is shown on an Egyptian pot dating back to 3200 BC. From prehistoric times, Egyptians probably used the power of the Nile's annual floods to irrigate their lands, gradually learning to regulate much of it through purposely built irrigation channels and 'catch' basins. Similarly, the early peoples of Mesopotamia, the Sumerians, learned to use the Tigris and Euphrates rivers for much the same purposes. But more extensive use of wind and water (and even human) power required another invention.

According to archaeologists, the wheel was invented around 4000 B.C. The wheel was probably independently invented in Mesopotamia (in present-day Iraq) as well. Estimates on when this may have occurred range from 5500 to 3000 B.C., with most experts putting it closer to 4000 B.C. The oldest artifacts with drawings that depict wheeled carts date from about 3000 B.C.; however, the wheel may have been in use for millennia before these drawings were made. There is also evidence from the same period of time that wheels were used for the production of pottery. (Note that the original potter's wheel was probably not a wheel, but rather an irregularly shaped slab of flat wood with a small hollowed or pierced area near the center and mounted on a peg driven into the earth. It would have been rotated by repeated tugs by the potter or his assistant.) More recently, the oldest-known wooden wheel in the world was found in the Ljubljana marshes of Slovenia.

The invention of the wheel revolutionized activities as disparate as transportation, war, and the production of pottery (for which it may have been first used). It didn't take long to discover that wheeled wagons could be used to carry heavy loads and fast (rotary) potters' wheels enabled early mass production of pottery. But it was the use of the wheel as a transformer of energy (through water wheels, windmills, and even treadmills) that revolutionized the application of nonhuman power sources.

Medieval and Modern history (300 AD —)

Innovation continued through the Middle Ages with advances such as silk, the horse collar and horseshoes in the first few hundred years after the fall of the Roman Empire. Medieval technology saw the use of simple machines (such as the lever, the screw, and the pulley) being combined to form more complicated tools, such as the wheelbarrow, windmills and clocks. The Renaissance brought forth many of these innovations, including the printing press (which facilitated the greater communication of knowledge), and technology became increasingly associated with science, beginning a cycle of mutual advancement. The advancements in technology in this era allowed a more steady supply of food, followed by the wider availability of consumer goods.

Starting in the United Kingdom in the 18th century, the Industrial Revolution was a period of great technological discovery, particularly in the areas of agriculture, manufacturing, mining, metallurgy and transport, driven by the discovery of steam power. Technology later took another step with the harnessing of electricity to create such innovations as the electric motor, the light bulb and countless others. Scientific advancement and the discovery of new concepts later allowed for powered flight and advancements in medicine, chemistry, physics and engineering. The rise in technology has led to the construction of skyscrapers and large cities whose inhabitants rely on automobiles or other powered transit for transportation. Communication was also improved with the invention of the telegraph, telephone, radio and television.

The second half of the 20th century brought a host of new innovations. In physics, the discovery of nuclear fission led to both nuclear weapons and nuclear energy. Computers were invented and later miniaturized using transistors and integrated circuits, and the creation of the Internet followed. Humans have also been able to explore space with satellites (later used for telecommunication) and in manned missions going all the way to the Moon. In medicine, this era brought innovations such as open-heart surgery and later stem-cell therapy, along with new medications and treatments. Complex manufacturing and construction techniques and organizations are needed to construct and maintain these new technologies, and entire industries have arisen to support and develop succeeding generations of increasingly complex tools. Modern technology increasingly relies on training and education; its designers, builders, maintainers and users often require sophisticated general and specific training. Moreover, these technologies have become so complex that entire fields have been created to support them, including engineering, medicine, and computer science, and other fields have been made more complex, such as construction, transportation and architecture.

Technicism

Generally, technicism is an over-reliance on, or overconfidence in, technology as a benefactor of society. Taken to an extreme, technicism is the belief that humanity will ultimately be able to control the entirety of existence using technology. In other words, human beings will someday be able to master all problems and possibly even control the future using technology. Some, such as Stephen V. Monsma, connect these ideas to the abdication of religion as a higher moral authority.

Optimism

Optimistic assumptions are made by proponents of ideologies such as transhumanism and singularitarianism, which view technological development as generally having beneficial effects for the society and the human condition. In these ideologies, technological development is morally good. Some critics see these ideologies as examples of scientism and techno-utopianism and fear the notion of human enhancement and technological singularity which they support. Some have described Karl Marx as a techno-optimist.

Pessimism

On the somewhat pessimistic side are certain philosophers like Herbert Marcuse and John Zerzan, who believe that technological societies are inherently flawed a priori. They suggest that the result of such a society is to become evermore technological at the cost of freedom and psychological health.

Many, such as the Luddites and the prominent philosopher Martin Heidegger, hold serious reservations about technology, although they do not regard it as inherently flawed. Heidegger presents such a view in "The Question Concerning Technology": "Thus we shall never experience our relationship to the essence of technology so long as we merely conceive and push forward the technological, put up with it, or evade it. Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it."

Some of the most poignant criticisms of technology are found in what are now considered to be dystopian literary classics, for example Aldous Huxley's Brave New World and other writings, Anthony Burgess's A Clockwork Orange, and George Orwell's Nineteen Eighty-Four. And in Goethe's Faust, Faust's selling his soul to the devil in return for power over the physical world is also often interpreted as a metaphor for the adoption of industrial technology.

An overtly anti-technological treatise is Industrial Society and Its Future, written by Theodore Kaczynski (aka The Unabomber) and printed in several major newspapers (and later books) as part of an effort to end his bombing campaign of the techno-industrial infrastructure.

Appropriate technology

The notion of appropriate technology, however, was developed in the 20th century (e.g., see the work of Jacques Ellul) to describe situations where it was not desirable to use very new technologies or those that required access to some centralized infrastructure or parts or skills imported from elsewhere. The eco-village movement emerged in part due to this concern.

Other animal species

This adult gorilla uses a branch as a walking stick to gauge the water's depth; an example of technology usage by primates.

The use of basic technology is also a feature of other animal species apart from humans. These include primates such as chimpanzees, some dolphin communities, and crows. Considering a more generic perspective of technology as ethology of active environmental conditioning and control, we can also refer to animal examples such as beavers and their dams, or bees and their honeycombs.

The ability to make and use tools was once considered a defining characteristic of the genus Homo. However, the discovery of tool construction among chimpanzees and related primates has discarded the notion of the use of technology as unique to humans. For example, researchers have observed wild chimpanzees utilising tools for foraging: some of the tools used include leaf sponges, termite fishing probes, pestles and levers. West African chimpanzees also use stone hammers and anvils for cracking nuts, as do capuchin monkeys of Boa Vista, Brazil.

Future technology

Theories of technology often attempt to predict the future of technology based on the high technology and science of the time. This process is difficult if not impossible. Referring to the sheer velocity of technological innovation, Arthur C. Clarke said "Any sufficiently advanced technology is indistinguishable from magic."

http://en.wikipedia.org/wiki/Technology

 

 

Modern technology is changing the way our brains work, says neuroscientist

By SUSAN GREENFIELD

Human identity, the idea that defines each and every one of us, could be facing an unprecedented crisis.

It is a crisis that would threaten long-held notions of who we are, what we do and how we behave.

It goes right to the heart - or the head - of us all. This crisis could reshape how we interact with each other, alter what makes us happy, and modify our capacity for reaching our full potential as individuals.

And it's caused by one simple fact: the human brain, that most sensitive of organs, is under threat from the modern world.

Unless we wake up to the damage that the gadget-filled, pharmaceutically-enhanced 21st century is doing to our brains, we could be sleepwalking towards a future in which neuro-chip technology blurs the line between living and non-living machines, and between our bodies and the outside world.

It would be a world where such devices could enhance our muscle power, or our senses, beyond the norm, and where we all take a daily cocktail of drugs to control our moods and performance.

Already, an electronic chip is being developed that could allow a paralysed patient to move a robotic limb just by thinking about it. As for drug-manipulated moods, they're already with us - although so far only to a medically prescribed extent.

Increasing numbers of people already take Prozac for depression, Paxil as an antidote for shyness, and give Ritalin to children to improve their concentration. But what if there were still more pills to enhance or "correct" a range of other specific mental functions?

What would such aspirations to be "perfect" or "better" do to our notions of identity, and what would it do to those who could not get their hands on the pills? Would some finally have become more equal than others, as George Orwell always feared?

Of course, there are benefits from technical progress - but there are great dangers as well, and I believe that we are seeing some of those today.

I'm a neuroscientist and my day-to-day research at Oxford University strives for an ever greater understanding - and therefore maybe, one day, a cure - for Alzheimer's disease.

But one vital fact I have learnt is that the brain is not the unchanging organ that we might imagine. It not only goes on developing, changing and, in some tragic cases, eventually deteriorating with age, it is also substantially shaped by what we do to it and by the experience of daily life. When I say "shaped", I'm not talking figuratively or metaphorically; I'm talking literally. At a microcellular level, the infinitely complex network of nerve cells that make up the constituent parts of the brain actually change in response to certain experiences and stimuli.

The brain, in other words, is malleable - not just in early childhood but right up to early adulthood, and, in certain instances, beyond. The surrounding environment has a huge impact both on the way our brains develop and how that brain is transformed into a unique human mind.

Of course, there's nothing new about that: human brains have been changing, adapting and developing in response to outside stimuli for centuries.

What prompted me to write my book is that the pace of change in the outside environment and in the development of new technologies has increased dramatically. This will affect our brains over the next 100 years in ways we might never have imagined.

Our brains are under the influence of an ever-expanding world of new technology: multichannel television, video games, MP3 players, the internet, wireless networks, Bluetooth links - the list goes on and on.

But our modern brains are also having to adapt to other 21st century intrusions, some of which, such as prescribed drugs like Ritalin and Prozac, are supposed to be of benefit, and some of which, such as widely available illegal drugs like cannabis and heroin, are not.

Electronic devices and pharmaceutical drugs all have an impact on the micro-cellular structure and complex biochemistry of our brains. And that, in turn, affects our personality, our behaviour and our characteristics. In short, the modern world could well be altering our human identity.

Three hundred years ago, our notions of human identity were vastly simpler: we were defined by the family we were born into and our position within that family. Social advancement was nigh on impossible and the concept of "individuality" took a back seat.

That only arrived with the Industrial Revolution, which for the first time offered rewards for initiative, ingenuity and ambition. Suddenly, people had their own life stories - ones which could be shaped by their own thoughts and actions. For the first time, individuals had a real sense of self.

But with our brains now under such widespread attack from the modern world, there's a danger that that cherished sense of self could be diminished or even lost.

Anyone who doubts the malleability of the adult brain should consider a startling piece of research conducted at Harvard Medical School. There, a group of adult volunteers, none of whom could previously play the piano, were split into three groups.

The first group were taken into a room with a piano and given intensive piano practice for five days. The second group were taken into an identical room with an identical piano - but had nothing to do with the instrument at all.

And the third group were taken into an identical room with an identical piano and were then told that for the next five days they had to just imagine they were practising piano exercises.

The resultant brain scans were extraordinary. Not surprisingly, the brains of those who simply sat in the same room as the piano hadn't changed at all.

Equally unsurprising was the fact that those who had performed the piano exercises saw marked structural changes in the area of the brain associated with finger movement.

But what was truly astonishing was that the group who had merely imagined doing the piano exercises saw changes in brain structure that were almost as pronounced as those that had actually had lessons. "The power of imagination" is not a metaphor, it seems; it's real, and has a physical basis in your brain.

Alas, no neuroscientist can explain how the sort of changes that the Harvard experimenters reported at the micro-cellular level translate into changes in character, personality or behaviour. But we don't need to know that to realise that changes in brain structure and our higher thoughts and feelings are incontrovertibly linked.

What worries me is that if something as innocuous as imagining a piano lesson can bring about a visible physical change in brain structure, and therefore some presumably minor change in the way the aspiring player performs, what changes might long stints playing violent computer games bring about? That eternal teenage protest of 'it's only a game, Mum' certainly begins to ring alarmingly hollow.

Already, it's pretty clear that the screen-based, two-dimensional world that so many teenagers - and a growing number of adults - choose to inhabit is producing changes in behaviour. Attention spans are shorter, personal communication skills are reduced and there's a marked reduction in the ability to think abstractly.

This games-driven generation interpret the world through screen-shaped eyes. It's almost as if something hasn't really happened until it's been posted on Facebook, Bebo or YouTube.

Add that to the huge amount of personal information now stored on the internet - births, marriages, telephone numbers, credit ratings, holiday pictures - and it's sometimes difficult to know where the boundaries of our individuality actually lie. Only one thing is certain: those boundaries are weakening.

And they could weaken further still if, and when, neurochip technology becomes more widely available. These tiny devices will take advantage of the discovery that nerve cells and silicon chips can happily co-exist, allowing an interface between the electronic world and the human body. One of my colleagues recently suggested that someone could be fitted with a cochlear implant (a device that converts sound waves into electronic impulses and enables the deaf to hear) and a skull-mounted micro-chip that converts brain waves into words (a prototype is under research).

Then, if both devices were connected to a wireless network, we really would have arrived at the point which science fiction writers have been getting excited about for years. Mind reading!

He was joking, but for how long the gag remains funny is far from clear.

Today's technology is already producing a marked shift in the way we think and behave, particularly among the young.

I mustn't, however, be too censorious, because what I'm talking about is pleasure. For some, pleasure means wine, women and song; for others, more recently, sex, drugs and rock 'n' roll; and for millions today, endless hours at the computer console.

But whatever your particular variety of pleasure (and energetic sport needs to be added to the list), it's long been accepted that 'pure' pleasure - that is to say, activity during which you truly "let yourself go" - was part of the diverse portfolio of normal human life. Until now, that is.

Now, coinciding with the moment when technology and pharmaceutical companies are finding ever more ways to have a direct influence on the human brain, pleasure is becoming the sole be-all and end-all of many lives, especially among the young.

We could be raising a hedonistic generation who live only in the thrill of the computer-generated moment, and are in distinct danger of detaching themselves from what the rest of us would consider the real world.

This is a trend that worries me profoundly. For as any alcoholic or drug addict will tell you, nobody can be trapped in the moment of pleasure forever. Sooner or later, you have to come down.

I'm certainly not saying all video games are addictive (as yet, there is not enough research to back that up), and I genuinely welcome the new generation of "brain-training" computer games aimed at keeping the little grey cells active for longer.

As my Alzheimer's research has shown me, when it comes to higher brain function, it's clear that there is some truth in the adage "use it or lose it".

However, playing certain games can mimic addiction, and the heaviest users of these games might soon begin to do a pretty good impersonation of an addict.

Throw in circumstantial evidence that links a sharp rise in diagnoses of Attention Deficit Hyperactivity Disorder and the associated three-fold increase in Ritalin prescriptions over the past ten years with the boom in computer games and you have an immensely worrying scenario.

But we mustn't be too pessimistic about the future. It may sound frighteningly Orwellian, but there may be some potential advantages to be gained from our growing understanding of the human brain's tremendous plasticity. What if we could create an environment that would allow the brain to develop in a way that was seen to be of universal benefit?

I'm not convinced that scientists will ever find a way of manipulating the brain to make us all much cleverer (it would probably be cheaper and far more effective to manipulate the education system). And nor do I believe that we can somehow be made much happier - not, at least, without somehow anaesthetising ourselves against the sadness and misery that is part and parcel of the human condition.

When someone I love dies, I still want to be able to cry.

But I do, paradoxically, see potential in one particular direction. I think it possible that we might one day be able to harness outside stimuli in such a way that creativity - surely the ultimate expression of individuality - is actually boosted rather than diminished.

I am optimistic and excited by what future research will reveal about the workings of the human brain, and the extraordinary process by which it is translated into a uniquely individual mind.

But I'm also concerned that we seem to be so oblivious to the dangers that are already upon us, and that the debate about them has barely begun.

That debate must start now. Identity, the very essence of what it is to be human, is open to change - both good and bad. Our children, and certainly our grandchildren, will not thank us if we put off discussion much longer.

 

http://www.dailymail.co.uk/sciencetech/article-565207/Modern-technology-changing-way-brains-work-says-neuroscientist.html

 

 

