Electricity transmission - common methods and alternatives

The transfer of thermal energy is called heat transfer. There are three ways (Fig. 1) of transferring thermal energy:

  • conduction,
  • convection,
  • radiation.


Fig. 1. Three methods of heat transfer

Heat transfer is one of the ways to change the internal energy of a body.

High voltage as a way to reduce losses

Although the internal networks of most consumers operate at 220/380 V, electricity reaches them over high-voltage mains and is stepped down at transformer substations. There is a good reason for this scheme: the largest share of transmission losses comes from resistive heating of the wires.

Power loss is described by the following formula: Q = I² · R_L,

where I is the current flowing through the line and R_L is the line's resistance.

It follows from the formula that losses can be cut either by lowering the line resistance or by lowering the current. The first option requires a larger wire cross-section, which is unacceptable because it makes the power line far more expensive. The second option means raising the voltage: for the same transmitted power, a higher voltage means a smaller current, so introducing high-voltage lines reduces power losses.
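The effect can be made concrete with a short sketch. The figures below are illustrative assumptions, not values from the text: transmitting the same 1 MW over the same line at 110 kV instead of 10 kV cuts the current, and hence the I²R loss, by a factor of (110/10)² = 121.

```python
def line_loss_watts(power_w, voltage_v, line_resistance_ohm):
    """Joule heating loss Q = I^2 * R_L, with I = P / U
    (single conductor, unity power factor assumed)."""
    current = power_w / voltage_v
    return current ** 2 * line_resistance_ohm

# Transmitting 1 MW over a line with 10 ohm total resistance:
low = line_loss_watts(1e6, 10_000, 10)    # 10 kV  -> I = 100 A  -> 100 kW lost
high = line_loss_watts(1e6, 110_000, 10)  # 110 kV -> I ≈ 9.1 A -> ≈ 826 W lost
```

Raising the voltage elevenfold reduces the loss 121-fold, which is exactly why long-distance lines are run at high voltage.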

Protocol Layers

Digital data transmission by modem was first demonstrated in 1940; computer networks appeared about 25 years later.

Increasingly complex communication systems required new ways of describing how computer systems interact. The OSI conceptual model introduces the notion of abstract protocol layers. The structure was created by engineers of the International Organization for Standardization (ISO) and is specified in the ISO/IEC 7498-1 standard. Parallel work was carried out by the CCITT committee; in 1983 the two sets of documents were merged into a single model of protocol layers.

The seven-layer structure builds on the work of Charles Bachman. The OSI model also incorporated experience from the development of ARPANET, EIN, NPLNet, and CYCLADES. The resulting layers interact vertically with their neighbors: each layer uses the services of the one below it.

Important! Each OSI layer corresponds to a set of protocols determined by the system being used.

In computer networks, the protocol stack is divided into the following layers:

  1. Physical (bits): USB, RS-232, 8P8C.
  2. Data link (frames): PPP (including PPPoE, PPPoA), IEEE 802.2, Ethernet, DSL, ARP, L2TP. Obsolete: Token Ring, FDDI, ARCNET.
  3. Network (packets): IP, AppleTalk.
  4. Transport (segments, datagrams): TCP, UDP, SCTP; this layer also introduces port numbers.
  5. Session: RPC, PAP.
  6. Presentation: ASCII, JPEG, EBCDIC.
  7. Application: HTTP, FTP, DHCP, SNMP, RDP, SMTP.

Physical layer

Why do developers need a hundred standards? Many documents appeared evolutionarily, as demands grew. The physical layer is implemented by a set of connectors, cables, and interfaces. For example, shielded twisted-pair cable can carry high frequencies, making it possible to implement protocols with a bit rate of 100 Mbit/s. Optical fiber transmits light, widening the usable spectrum further, and gigabit networks emerge.

The physical layer covers digital modulation schemes, line coding, forward error correction, synchronization, channel multiplexing, and signal equalization.

Data link layer

Each port is driven by its own machine instructions. The data link layer describes how to transfer formatted information over the existing hardware. For example, PPPoE specifies how to run the PPP protocol over Ethernet networks; the connector traditionally used is 8P8C. Through evolutionary competition, Ethernet managed to displace its rivals. Its inventor, 3Com founder Robert Metcalfe, persuaded several large manufacturers (Intel, DEC, Xerox) to join forces.

Along the way, the channels were improved: coaxial cable → twisted pair → optical fiber. The changes pursued the following goals:

  • reduction in price;
  • increasing reliability;
  • introduction of duplex mode;
  • increasing noise immunity;
  • galvanic isolation;
  • powering devices via network cable.

Optical cable increased the length of the segment between signal regenerators. The data link protocol mainly describes the structure of the network: encoding methods, bit rate, number of nodes, and operating mode. The layer introduces the concept of a frame, implements MAC-address decoding, detects errors, resends requests, and controls timing.

Network

The generally accepted IP protocol defines the packet structure and introduces the now-familiar address written as four groups of numbers. Some address ranges are reserved. Resource owners are assigned names through DNS server databases. The underlying network configuration is largely irrelevant; only weak restrictions apply. Ethernet, for example, required a unique MAC address. IPv4 limits the maximum number of hosts to about 4.3 billion, which has sufficed for humanity so far.

Network addresses are usually divided into domains; for technical reasons there is no one-to-one correspondence with the four groups of numbers. The web is denoted by the abbreviation www (the World Wide Web). Today the uniform resource locator (URL) often omits these letters: a person who opens a browser clearly intends to surf the World Wide Web anyway.

Transport

The layer further extends the data format. A TCP segment groups packets into an ordered stream, simplifying the detection of lost information and guaranteeing its recovery.
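The guarantee described above can be seen directly from a transport-layer API. The following is an illustrative Python sketch (the helper name echo_once and the payload are mine, not from the text): a TCP connection over the loopback interface delivers the byte stream intact and in order, with retransmission handled invisibly below the application.

```python
import socket
import threading

def echo_once(server_sock):
    """Accept one connection and echo its data back (illustrative helper)."""
    conn, _ = server_sock.accept()
    data = conn.recv(1024)
    conn.sendall(data)  # TCP guarantees the bytes arrive intact and in order
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_once, args=(server,)).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
client.sendall(b"segment payload")
reply = client.recv(1024)
client.close()
server.close()
```

The application never sees segments, acknowledgments, or retransmissions; it reads back exactly the bytes that were written.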

Classification of power lines

In the energy industry, it is customary to divide power lines into types depending on the following indicators:

  1. Design features of the lines. Depending on the design, they can be of two types:
  • Overhead. Electricity is transmitted via wires suspended from supports.


    Overhead power lines

  • Cable. This installation method involves laying cable lines directly in the ground or in engineering structures specially designed for the purpose.


    Installation of block cable ducts

  2. Voltage. Depending on the voltage level, power lines are usually classified into the following types:
  • Low-voltage: all overhead lines with a voltage of no more than 1 kV.
  • Medium-voltage: from 1 to 35 kV.
  • High-voltage: 110-220 kV.
  • Extra-high-voltage: 330-750 kV.
  • Ultra-high-voltage: more than 750 kV.


    Ultra-high-voltage power line Ekibastuz-Kokshetau, 1150 kV

  3. Type of current. Transmission can use alternating or direct current. The first option is more common, since power plants are usually equipped with alternating-current generators; but for reducing losses, especially over long transmission distances, the second option is more effective. How transmission schemes are organized in each case, and the advantages of each, are described below.
  4. Classification by purpose. The following categories are adopted:
  • Lines of 500 kV and above, for ultra-long distances; such overhead lines interconnect separate power systems.
  • Trunk transmission lines (220-330 kV). These carry electricity generated at powerful hydroelectric, thermal, and nuclear power plants and integrate them into a single power system.
  • 35-150 kV lines are distribution lines; they supply electricity to large industrial sites, connect regional distribution points, etc.
  • Power lines with voltages up to 20 kV connect groups of consumers to the electrical network.
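The voltage classification above can be sketched as a simple lookup. The function name and return strings are illustrative; the thresholds follow the list.

```python
def voltage_class(kv):
    """Classify an overhead line by nominal voltage in kV, per the ranges above."""
    if kv <= 1:
        return "low-voltage"
    if kv <= 35:
        return "medium-voltage"
    if kv <= 220:
        return "high-voltage"
    if kv <= 750:
        return "extra-high-voltage"
    return "ultra-high-voltage"

# e.g. voltage_class(0.4) -> "low-voltage", voltage_class(1150) -> "ultra-high-voltage"
```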

Etymology

In English the word is usually used in the plural: data. We ask Slavophiles to refrain from reproaches: modern science was developed in Europe, heir of the Roman Empire. We will sidestep the question of the deliberate destruction of national history, leaving that debate to historians. Some experts trace the etymology to the ancient Indian word dati ("gift"). Dahl defines data as indisputable, obvious, known facts of an arbitrary kind.

This is interesting! Literary English (the New York Times, for instance) leaves the number of the word data open: use plural or singular as needed. Textbooks often draw a strict distinction, with datum as the singular. The question of the article is a separate matter that will not be discussed here. Experts tend to treat the noun as a mass noun.

Methods of transmitting electricity

There are two ways to transfer electricity:

  • Direct transmission method.
  • Converting electricity into another form of energy.

In the first case, electricity is transmitted through conductors, which are a wire or a conductive medium. This transmission method is used in overhead and cable power lines. Converting electricity into another form of energy opens up the prospect of wireless supply to consumers. This will eliminate the need for power lines and, accordingly, the costs associated with their installation and maintenance. Below are promising wireless technologies that are being improved.


Technologies for wireless transmission of electricity

Unfortunately, at the moment, the possibilities of transporting electricity wirelessly are very limited, so it is too early to talk about an effective alternative to the direct transmission method. Research work in this direction allows us to hope that a solution will be found in the near future.

Scheme of electricity transmission from power plant to consumer

The figure below shows typical circuits; the first two are open-loop, the rest closed-loop. The difference is that open-loop configurations have no redundancy, that is, no backup lines that can be brought in when the electrical load rises critically.


Example of the most common power line configurations

Designations:

  1. Radial scheme: at one end of the line is a power plant producing energy, at the other a consumer or distribution device.
  2. Trunk variant of the radial scheme; it differs from the previous one by the presence of branches between the initial and final transmission points.
  3. Main circuit with power supply at both ends of the power line.
  4. Ring type configuration.
  5. Trunk with a backup line (double trunk).
  6. Complex configuration option. Similar schemes are used when connecting critical consumers.

Now let's look in more detail at the radial circuit for transmitting generated electricity via AC and DC power lines.


Fig. 6. Schemes for transmitting electricity to consumers using power lines with alternating (A) and direct (B) current

Designations:

  1. A generator where electricity with a sinusoidal characteristic is generated.
  2. Substation with step-up three-phase transformer.
  3. A substation with a transformer that reduces the voltage of three-phase alternating current.
  4. An outlet for transmitting electrical energy to a distribution device.
  5. A rectifier, that is, a device that converts three-phase alternating current into direct current.
  6. An inverter unit, whose task is to form a sinusoidal voltage from the direct voltage.

As can be seen from diagram (A), electricity is supplied from an energy source to a step-up transformer, then, using overhead power lines, electricity is transported over considerable distances. At the end point, the line is connected to a step-down transformer and from it goes to the distributor.

The method of transmitting electricity in the form of direct current (B in Fig. 6) differs from the previous scheme by the presence of two converter blocks (5 and 6).

To close this section, here for clarity is a simplified diagram of a city network.


A clear example of a power supply block diagram

Designations:

  1. A power plant where electricity is produced.
  2. A substation that increases voltage to ensure high efficiency in transmitting electricity over long distances.
  3. High voltage power lines (35.0-750.0 kV).
  4. Substation with step-down functions (output 6.0-10.0 kV).
  5. Electricity distribution point.
  6. Power cable lines.
  7. The central substation at an industrial facility serves to reduce the voltage to 0.40 kV.
  8. Radial or trunk cable lines.
  9. Introductory panel in the workshop room.
  10. District distribution substation.
  11. Cable radial or trunk line.
  12. Substation that reduces voltage to 0.40 kV.
  13. Input panel of a residential building for connecting the internal electrical network.

Transmission of electricity over long distances

The main problem associated with this task is the increase in losses with increasing length of power lines. As mentioned above, to reduce energy costs for transmitting electricity, the current is reduced by increasing the voltage. Unfortunately, this solution gives rise to new problems, one of which is corona discharges.

From the point of view of economic feasibility, losses in overhead lines should not exceed 10%. Below is a table showing the maximum length of lines that meet the profitability conditions.

Table 1. Maximum length of power lines taking into account profitability (no more than 10% losses)

Overhead line voltage (kV)    Maximum length (km)
0.40                          1.0
10                            25
35                            100
110                           300
220                           700
500                           2300
1150*                         4500*

* At present, the ultra-high-voltage overhead line has been switched to operation at half its rated voltage (500 kV).
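Table 1 lends itself to a programmatic lookup. The helper below is hypothetical (the names are mine); only the figures come from the table.

```python
# Economic line length per Table 1 (<= 10% losses), keyed by nominal voltage in kV.
MAX_ECONOMIC_LENGTH_KM = {
    0.4: 1, 10: 25, 35: 100, 110: 300, 220: 700, 500: 2300, 1150: 4500,
}

def is_economic(voltage_kv, length_km):
    """True if the proposed line stays within the ~10% loss guideline of Table 1."""
    limit = MAX_ECONOMIC_LENGTH_KM.get(voltage_kv)
    if limit is None:
        raise ValueError(f"no tabulated limit for {voltage_kv} kV")
    return length_km <= limit

# e.g. a 250 km line at 110 kV meets the guideline; 150 km at 35 kV does not.
```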

Types of data

Historically, information has been presented in many ways. Let us leave the hieroglyphs of the papyri to historians and examine modern techniques; the greatest impact came from the development of electricity. Had humanity learned to transmit thoughts directly, our symbol systems would have turned out quite differently...

Analog signal

The first attempts to measure analog quantities were Volta's experiments with voltage and current. Later, Georg Ohm was able to estimate the resistance of a conductor. In each case analog values were used. Representing an object's characteristics as currents and voltages gave a powerful impetus to the development of the modern world: a cathode-ray tube with three-color pixel brightness displays a reasonably sharp picture.

The reasons for moving away from analog signals were revealed by the Second World War. The Green Hornet system was able to encrypt speech essentially perfectly; its 6-level signal can hardly be called digital, but the trend is clear. Historically, the first attempt to transmit a binary code was Schilling's 1832 telegraph experiments. Seeking to reduce the number of wires connecting subscribers, the diplomat recalled binary numbering methods proposed by priests. Yet it took humanity another century and a half to adopt digital transmission.

Binary digital code

Binary numbers are well known: an analog value is represented as a discrete number and then encoded. The resulting set of zeros and ones is usually divided into 8-bit words. For example, the first Windows operating systems were 16-bit, while the processor's graphics module processed floating-point numbers of higher bit widths. Even longer words are used by specialized graphics processors. The specifics of the system determine the particular representation of information.
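The digitization just described, representing an analog value as a discrete coded number, can be sketched in a few lines. This is an illustrative example assuming a unit-amplitude signal and 8-bit code words; the function names are mine.

```python
import math

def quantize_8bit(value, lo=-1.0, hi=1.0):
    """Map an analog sample in [lo, hi] to an 8-bit code word (0..255)."""
    clamped = min(max(value, lo), hi)
    return round((clamped - lo) / (hi - lo) * 255)

def sample_sine(freq_hz, rate_hz, n):
    """Sample a unit sine wave: the discrete, coded stand-in for the analog signal."""
    return [quantize_8bit(math.sin(2 * math.pi * freq_hz * k / rate_hz))
            for k in range(n)]

codes = sample_sine(50, 8000, 16)  # 50 Hz tone sampled at 8 kHz
```

Each code word is one byte; the finer the quantization step and the higher the sampling rate, the smaller the error relative to the original analog value.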

Data transfer allows humanity to move forward faster. People have different abilities: the best collector or custodian of information is not necessarily the one able to put it to use (for himself, the planet, the city...), so it makes sense to pass it on. The modern world is called the era of the digital revolution. Historically, binary data proved easier to transmit, and a set of specific capabilities appears:

  1. Error correction.
  2. Encryption.
  3. Simplification of physical lines.
  4. More efficient use of spectrum, reduction of transmitter power, specific energy flux density.
  5. Error Recognition (EDC, 1951).
  6. Possibility of exact repetition and playback.
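Capability 5, error recognition, can be illustrated with the simplest possible scheme: a single even-parity bit. This is a toy sketch, far simpler than real EDC codes, and the function names are mine.

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(word):
    """A single flipped bit makes the count of 1s odd, exposing the error."""
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
ok = check_parity(word)           # True: transmission intact
word[2] ^= 1                      # simulate a one-bit channel error
detected = not check_parity(word) # True: the error is detected
```

A single parity bit detects any odd number of flipped bits but cannot locate or correct them; practical codes extend the same idea with more redundancy.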

The second half of the 20th century produced hundreds of techniques for digitizing analog objects. The main feature of a binary signal is discreteness: the code cannot convey an analog value exactly, but the sampling step has become so small that the error is negligible. A striking example is Full HD imagery: high screen resolution conveys the fine nuances of an object much better. At some point the resolution of digital technology overtakes the physiological capabilities of human vision.

Direct current as an alternative

As an alternative to AC transmission over long distances, direct-current overhead lines can be considered. Such power lines have the following advantages:

  • Line length does not limit the transmissible power, and the maximum power is significantly higher than for AC lines; that is, if electricity consumption grows (up to a certain limit), no modernization is needed.
  • Static stability can be ignored.
  • There is no need to synchronize the frequency of the connected power systems.
  • It is possible to organize the transmission of electricity via a two-wire or single-wire line, which significantly simplifies the design.
  • Less influence of electromagnetic waves on communications.
  • There is virtually no generation of reactive power.

Despite these capabilities, DC power lines are not widespread. This is primarily due to the high cost of the equipment required to convert sinusoidal voltage into direct voltage. DC generators are practically unused, except at solar power plants.

Inversion (the reverse of rectification) is not simple either: obtaining a high-quality sinusoidal output significantly increases equipment cost. One should also consider the difficulty of tapping power along the line and the low profitability of overhead DC lines shorter than 1000-1500 km.

Briefly about superconductivity

Wire resistance can be reduced dramatically by cooling conductors to ultra-low temperatures. This would bring transmission efficiency to a qualitatively new level and increase line lengths, allowing electricity to be used far from where it is produced. Unfortunately, with currently available technologies, superconducting transmission remains economically infeasible.

The idea of openness

The idea of free access to information was put forward by the sociologist Robert King Merton, an observer of the Second World War. Since 1946 it has covered the transfer and storage of computer information; 1954 added processing. In December 2007, interested parties gathered in Sebastopol, California, and conceptualized open-source software, the open Internet, and the potential of mass access. Obama later adopted the Memorandum on Transparency and Open Government.


Robert King Merton

Humanity's awareness of the real potential of civilization is accompanied by calls to solve problems jointly. The concept of open data is widely discussed in a 1995 document by an American science agency; the text touches on geophysics and ecology. A well-known example is the DuPont corporation, which used controversial Teflon production technologies.

Heat transfer features

What is heat transfer, and what are its features? It cannot be stopped completely; one can only reduce its rate. Is heat transfer used in nature and technology? Heat exchange accompanies and characterizes many natural phenomena: the evolution of planets and stars, and meteorological processes on the surface of our planet. Together with mass exchange, for example, heat transfer makes it possible to analyze evaporative cooling, drying, and diffusion. It occurs between two carriers of thermal energy through a solid wall that acts as the interface between the bodies.

Heat transfer in nature and technology is a way of characterizing the state of an individual body and analyzing the properties of a thermodynamic system.

Newton's Law of Cooling and Coefficients

Most often, liquids and gases are heated or cooled when they come into contact with the surface of various solid objects. This process of heat exchange is called heat transfer, and the surface that transfers heat is called the “heat exchange surface” or “heat-transfer surface.”

The rate of heat transfer can be calculated using an empirical equation based on Newton's law of cooling. For a steady-state process it takes the form Q = α·F·(t_l − t_st)·τ, where:

  • Q is the heat flow;
  • α is the heat transfer coefficient, showing how much heat 1 m² of surface receives or gives off per unit time when the temperature difference between the media is 1 °C (it characterizes the rate of heat transfer in the coolant and depends on the flow regime, the coolant's physical properties, the channel geometry, and the state of the heat-releasing surface);
  • F is the heat-transfer surface area;
  • t_l is the fluid temperature;
  • t_st is the wall temperature;
  • τ is time.

When considering heat transfer through a solid wall, a temperature difference between its surfaces is a prerequisite. It creates a heat flow directed from the hotter surface to the colder one. For a steady-state process, Fourier's law takes the form Q = λ·F·(t′st − t″st)/δ, where:

  • Q is the heat flow;
  • λ is the thermal conductivity coefficient, showing how much heat passes per unit time through a segment of the surface when the temperature drops by 1 °C per unit length of the normal to the isothermal surface (a physical characteristic of a substance's ability to conduct heat, depending on its nature, structure, and other properties);
  • F is the wall surface area;
  • t′st − t″st is the temperature difference between the two wall surfaces;
  • δ is the wall thickness.

Often, to solve problems in physics, it is necessary to calculate heat transfer using formulas suitable for various types of processes. This difference is explained by the different physical characteristics of substances, as well as the peculiarities of heat transfer methods.
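Both formulas above can be written as small helpers. The symbols follow the text; the brick-wall numbers in the example are illustrative assumptions, not values from the article.

```python
def convective_heat(alpha, area_m2, t_fluid, t_wall, seconds):
    """Newton's law of cooling: Q = α · F · (t_l − t_st) · τ."""
    return alpha * area_m2 * (t_fluid - t_wall) * seconds

def conductive_flow(lam, area_m2, t_hot_face, t_cold_face, thickness_m):
    """Fourier's law for a plane wall: Q = λ · F · (t′st − t″st) / δ."""
    return lam * area_m2 * (t_hot_face - t_cold_face) / thickness_m

# e.g. a 0.3 m brick wall (λ ≈ 0.6 W/(m·K)), 10 m², faces at 20 °C and 0 °C:
q = conductive_flow(0.6, 10, 20, 0, 0.3)  # -> 400 W
```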

Convection

Answering the question of what heat transfer is, consider heat transfer in liquids or gases through spontaneous or forced mixing. In forced convection, the movement of matter is driven by external forces: fan blades, a pump. This option is used where natural convection is ineffective.

A natural process is observed when, due to uneven heating, the lower layers of a substance are heated: their density decreases and they rise, while the upper layers cool, become denser, and sink. The process repeats, and with continued mixing, self-organization into a pattern of vortices can occur, forming a regular lattice of convection cells.

Thanks to natural convection, clouds form, precipitation occurs, and tectonic plates move. It is through convection in the Sun that granules are formed.

Proper use of heat transfer ensures minimal heat loss and maximum useful output.

The essence of convection

Convection in liquids and gases can be explained using Archimedes' principle together with thermal expansion. As the temperature rises, the volume of a liquid increases and its density decreases. Under the action of the buoyant force, the lighter (heated) liquid rises, while the cold (denser) layers sink and gradually warm up.

If the liquid is heated from above, the warm liquid stays in place and no convection is observed. When heated from below, however, a circulation cycle arises that carries energy from heated areas to cold ones. In gases, convection works by a similar mechanism.

From a thermodynamic point of view, convection is considered as a heat transfer option in which the transfer of internal energy occurs in separate flows of substances that are heated unevenly. A similar phenomenon occurs in nature and in everyday life. For example, heating radiators are installed at a minimum height from the floor, near the window sill.

Cold air is warmed up by the radiator, then gradually rises upward, where it mixes with cold air masses descending from the window. Convection leads to a uniform temperature in the room.

Among the common examples of atmospheric convection are winds: monsoons, breezes. The air, which is heated over some fragments of the Earth, cools over others, as a result of which it circulates and transfers moisture and energy.

Energy saving

Although energy can change its form, it cannot simply disappear; trace it back to its source and you will find it never appears out of nowhere.

These observations led scientists to formulate the law of conservation of energy.

The law says that energy must come from somewhere: it is never created from nothing but only changes from one form to another, the total amount remaining the same. Energy chains usually start with some form of potential energy. Trace many energy chains back and you will find they lead to nuclear reactions inside the Sun, which convert energy stored in atomic nuclei into thermal and radiant energy.

  • The conservation law: energy input = energy output
  • This can be rewritten as: energy consumed = useful energy + wasted energy

Designers are concerned with making devices that achieve maximum efficiency.

  • This is measured by energy efficiency: efficiency (%) = useful energy × 100 / consumed energy

The human body is not very efficient at converting energy. An athlete uses up to 40,000 joules of chemical (food) energy during a 100 m sprint, of which only 8,000 joules become kinetic energy of running. The rest is lost as heat.

The amount of energy a machine converts every second is called its power. Power is measured in watts: 1 watt equals 1 joule of energy converted per second.
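The efficiency and power definitions above can be applied directly to the sprinter example. This is a simple sketch; the 10-second sprint duration is an assumed figure, not from the text.

```python
def efficiency_percent(useful_j, consumed_j):
    """Energy efficiency % = useful energy × 100 / consumed energy."""
    return useful_j * 100 / consumed_j

def power_watts(energy_j, seconds):
    """Power in watts: joules converted per second."""
    return energy_j / seconds

sprint_efficiency = efficiency_percent(8_000, 40_000)  # -> 20.0 %
# Assuming a 10 s sprint, the useful (kinetic) output averages:
useful_power = power_watts(8_000, 10)  # -> 800 W
```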

Information processing

Once the necessary information has been obtained, it must be stored and transmitted. The methods of transmitting and processing information clearly reflect the stages of human development.

  • At the dawn of development, data processing meant putting it on paper with ink, quill, pen, etc. The drawback of this method was unreliable storage: paper survives only for a limited period, determined by its service life and the conditions of its use.
  • The next stage is mechanical information technology, which uses a typewriter, telephone, and voice recorder.
  • Later, mechanical information processing was replaced by electrical means, as methods of transmitting information kept improving. Such means include electric typewriters, portable dictaphones, and copying machines.

Concept of communication

Communication is a system of interaction between several objects. In a generalized sense, this is the transfer of information from one object to another. Communication is the key to the success of an organization.

Methods of transmitting information (communication) perform the following functions: organizational, interactive, expressive, incentive, perceptual.

The organizational function establishes a system of relationships between employees; the interactive function lets one influence the mood of others; the expressive function conveys emotional coloring; the incentive function calls to action; the perceptual function allows different interlocutors to understand each other.
