Computer

This article explains the basics of all computers. For the currently most common type of computer, see Personal Computer.

Data processing system and Electronic data processing system redirect to this article. For general information on the basics of data processing or electronic data processing, see those articles.

A computer (an English word; German pronunciation [kɔmˈpjuːtɐ]) or calculator is a device that processes data by means of programmable calculation rules. Accordingly, the more abstract or outdated terms computing system, data processing system, electronic data processing system and electronic brain, used synonymously, are also occasionally encountered.

Charles Babbage and Ada Lovelace are considered the pioneers of the modern universally programmable computer on account of the Analytical Engine, designed by Babbage in 1837. Konrad Zuse (Z3, 1941, and Z4, 1945) in Berlin as well as John Presper Eckert and John William Mauchly (ENIAC, 1946) built the first functional devices of this kind. In classifying a device as a universally programmable computer, Turing completeness plays an essential role. It is named after the English mathematician Alan Turing, who introduced the logical model of the Turing machine in 1936.

Early computers were also called (large) calculating machines; their input and output of data was initially limited to numbers. Modern computers can indeed handle other data, such as letters and sounds, but this data is converted into numbers within the computer and processed as such, which is why a computer is still a calculating machine today.

With increasing performance, new areas of application opened up. Today, computers are found in all areas of daily life, usually in specialized variants tailored to a particular application. For example, integrated microcomputers (embedded systems) control everyday appliances such as washing machines and video recorders or check coins in vending machines; in modern automobiles they display driving data and, as "driving assistants", carry out various maneuvers themselves.

Universal computers are found in smartphones and game consoles. Personal computers (as opposed to mainframe computers used by many) are used for information processing in business and public authorities as well as by private individuals; supercomputers are used to simulate complex processes, e.g. in climate research or for medical calculations.

History of the term

Calculator

The German term Rechner is derived from the verb rechnen ('to calculate'). For its etymology, see Rechnen#Etymologie.

Computer

The English noun computer is derived from the English verb to compute, which in turn comes from the Latin verb computare, meaning 'to add up'.

The English term computer was originally a job title for assistants who performed recurring calculations (e.g. for astronomy, geodesy or ballistics) on behalf of mathematicians and filled in tables such as logarithm tables.

In early church history, the Julian calendar replaced the Jewish calendar. The resulting difficulties in calculating the date of Easter persisted into the Middle Ages and were the subject of numerous publications, often titled Computus Ecclesiasticus. However, other titles dealing with arithmetical problems can also be found, e.g. one by Sigismund Suevus from 1574. The earliest text in which the word computer is used on its own dates from 1613.

The word first appeared in the newspaper The New York Times on May 2, 1892, in a United States Navy classified ad entitled A Computer Wanted, which required knowledge of algebra, geometry, trigonometry and astronomy.

Ballistic tables were calculated at the University of Pennsylvania in Philadelphia on behalf of the United States Army. The results were books for the artillery that predicted the trajectories of different projectiles for different guns. These calculations were done largely by hand; the only aid was a tabulating machine that could multiply and divide. The clerks who did this work were called "computers" (in the sense of a human computer). In 1946 the term was applied for the first time to a technical device, the Electronic Numerical Integrator and Computer (ENIAC) developed there. The term has been in use in Germany since 1962.

Basics

Fundamentally, there are two different designs: a computer is a digital computer if it processes digital data (i.e. numbers and text characters) with digital device units; it is an analog computer if it processes analog data (i.e. continuously varying electrical quantities such as voltage or current) with analog device units.

Today, digital computers are used almost exclusively. They follow common basic principles that enable their free programming. A digital computer has two basic components: the hardware, formed by the electronic, physically tangible parts of the computer, and the software, which describes the programming of the computer.

A digital computer initially consists only of hardware. First, the hardware provides a memory in which data can be stored in portions, as on the numbered pages of a book, and retrieved at any time for processing or output. Second, the computing unit of the hardware has basic building blocks for free programming with which any processing logic for data can be represented: these building blocks are, essentially, computation, comparison and conditional jump. A digital computer can, for example, add two numbers, compare the result with a third number and then, depending on the result, continue either at one point or another in the program. In computer science, this model is theoretically represented by the Turing machine mentioned at the beginning; the Turing machine embodies the fundamental considerations of computability.
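
As a minimal sketch (the values are chosen arbitrarily), the following Python snippet plays through exactly these three building blocks: a computation, a comparison with a third number, and a branch that continues at one point or the other depending on the result:

    a, b, threshold = 2, 3, 4

    result = a + b            # computation: add two numbers
    if result > threshold:    # comparison with a third number
        print("greater")      # continue here ...
    else:
        print("not greater")  # ... or here, depending on the result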

However, it is only through software that the digital computer becomes useful. Every piece of software is, in principle, a defined, functional arrangement of the building blocks described above, namely computation, comparison and conditional jump, where each building block can be used as often as desired. This arrangement of building blocks, called a program, is stored in the form of data in the computer's memory. From there, it can be read and processed by the hardware. This operating principle of digital computers has not changed significantly since its origins in the mid-20th century, even though the details of the technology have been improved considerably.
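
To make this stored-program principle concrete, the following Python sketch models a toy machine in which the program itself is just data in memory, read and executed instruction by instruction (the instruction names and formats are invented for illustration, not a real instruction set):

    memory = [0] * 16
    program = [                  # the program is stored as data
        ("SET", 0, 5),           # memory[0] = 5
        ("SET", 1, 3),           # memory[1] = 3
        ("ADD", 0, 1, 2),        # memory[2] = memory[0] + memory[1]
        ("JUMP_IF_ZERO", 2, 5),  # conditional jump: skip the print if the sum is 0
        ("PRINT", 2),            # output memory[2]
        ("HALT",),
    ]

    pc = 0                       # program counter: next instruction to execute
    while True:
        op = program[pc]
        if op[0] == "SET":
            memory[op[1]] = op[2]
        elif op[0] == "ADD":
            memory[op[3]] = memory[op[1]] + memory[op[2]]
        elif op[0] == "JUMP_IF_ZERO" and memory[op[1]] == 0:
            pc = op[2]           # jump: continue at another point in the program
            continue
        elif op[0] == "PRINT":
            print(memory[op[1]])
        elif op[0] == "HALT":
            break
        pc += 1                  # otherwise proceed to the next instruction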

Analog computers work according to a different principle: in them, analog components (amplifiers, capacitors) take the place of logic programming. Analog computers were formerly used more often to simulate control processes (see control engineering), but today they have been almost completely replaced by digital computers. During a transitional period, there were also hybrid computers that combined an analog with a digital computer.

Possible uses for computers include:

  • Media design (image and text processing)
  • Management and archiving applications
  • Control of machines and processes (printers, production in industry by e.g. robots, embedded systems)
  • Calculations and simulations (e.g. BOINC)
  • Media playback (Internet, television, videos, entertainment applications such as computer games, educational software)
  • Communication (chat, e-mail, social networks)
  • Software development

Hardware architecture

The principle generally used today, called the Von Neumann architecture after its description by John von Neumann in 1945, defines five main components of a computer:

  • the arithmetic-logic unit (ALU),
  • the control unit,
  • the bus unit,
  • the memory unit and
  • the input/output unit(s).

In today's computers, the ALU and the control unit are usually merged into one component, the so-called CPU (Central Processing Unit).

The memory is a number of numbered, addressable "cells"; each of them can hold a single piece of information. This information is stored in the memory cell as a binary number, i.e. a sequence of yes/no information in the sense of ones and zeros.
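
For illustration, the following Python snippet shows one such cell's content both as an ordinary number and as the sequence of ones and zeros in which it is actually stored (the 8-bit width is chosen as an example):

    value = 65                   # the content of one memory cell
    print(format(value, "08b"))  # -> 01000001: the same value as ones and zeros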

With regard to memory, an important design decision of the Von Neumann architecture is that program and data share one memory area (data usually occupy the lower memory area and programs the upper one). In the Harvard architecture, by contrast, data and programs have their own (physically separate) memory areas. Access to the two memory areas can then take place in parallel, which yields speed advantages; for this reason, digital signal processors are often implemented in the Harvard architecture. Furthermore, in the Harvard architecture, data write operations cannot overwrite programs (information security).

In the Von Neumann architecture, the control unit is responsible for memory management in the form of read and write accesses.

The ALU has the task of combining values from memory cells. It receives the values from the control unit, combines them (by adding two numbers, for example) and returns the result to the control unit, which can then use it for a comparison or write it to another memory cell.
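
A toy version of such an arithmetic-logic unit might look like the following Python sketch (the operation names are invented for illustration; a real ALU works on binary words in hardware):

    def alu(op, a, b):
        # Combine two values as instructed by the control unit.
        if op == "ADD":
            return a + b
        if op == "SUB":
            return a - b
        if op == "CMP":
            return (a > b) - (a < b)  # comparison result: -1, 0 or 1
        raise ValueError("unknown operation: " + op)

    print(alu("ADD", 2, 3))  # -> 5, ready to be written back to a memory cell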

Finally, the input/output units are responsible for entering the initial programs into the memory cells and displaying the results of the calculation to the user.

Software architecture

The Von Neumann architecture is, so to speak, the lowest level of a computer's functional principle above the electrophysical processes in its conductor tracks. The first computers were in fact programmed this way: the numbers of instructions and of particular memory cells were written into the individual memory cells, one after the other, as the program required. To reduce this effort, programming languages were developed. From text commands that also carry a semantically understandable meaning for the programmer (e.g. GOTO for an "unconditional jump"), these automatically generate the numbers in the memory cells that the computer ultimately processes as a program.
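
The following Python sketch illustrates this translation step with a toy "assembler"; the mnemonics and opcode numbers are invented for illustration:

    OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "GOTO": 4}

    def assemble(lines):
        # Translate text mnemonics into the numbers written into memory cells.
        code = []
        for line in lines:
            mnemonic, operand = line.split()
            code.extend([OPCODES[mnemonic], int(operand)])
        return code

    print(assemble(["LOAD 10", "ADD 11", "STORE 12", "GOTO 0"]))
    # -> [1, 10, 2, 11, 3, 12, 4, 0]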

Later, certain recurring procedures were grouped together in so-called libraries so that the wheel would not have to be reinvented each time, e.g. interpreting a pressed keyboard key as the letter "A" and thus as the number 65 (in ASCII code). The libraries were in turn bundled into higher-level libraries that link sub-functions into complex operations (for example, displaying the letter "A", consisting of 20 individual black dots and 50 individual white dots, on the screen after the user has pressed the "A" key).
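
The "A"-to-65 mapping can be reproduced directly in Python:

    print(ord("A"))  # -> 65: the letter "A" as its ASCII code number
    print(chr(65))   # -> A: and back again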

In a modern computer, a great many of these program levels operate on top of one another. More complex tasks are broken down into subtasks that other programmers have already dealt with, who in turn built on the preliminary work of yet other programmers whose libraries they use. At the lowest level, however, there is always the so-called machine code: the sequence of numbers with which the computer is actually controlled.

Computer system

A computer system is defined as:

  • a network or association of multiple computers that are individually controlled and can access shared data and devices;
  • the totality of external and internal components, i.e. hardware, software as well as connected peripheral devices, which determine the interaction of a single fully functional computer;
  • a system of programs for controlling and monitoring computers.
