LANGUAGE LOCALIZATION OF WEB MAPS



The clarity, simplicity, and unity of multilingual map content that programmers and topographers strive for in mapping services are a matter of the near future, given the simple but elegant solutions already available to users. Multilingual web mapping services such as Bing Maps, Yandex.Maps and Google Maps provide users with adequate place names, adjusting them to the language assigned on the basis of the user's Geo-IP. Map presentation with adjustable legends improves the interaction between services and users around the world and meets the requirements of different users, much as intelligent assistants for mobile phones adapt to the user's language [2, p. 590].

Microsoft's Bing Maps service generates place names across the world map in the user's language, allowing the local language in special cases. When accessing the Web with a Russian Geo-IP, one can expect the names of municipal locations to be translated into Russian, yet street names are often shown in the original language, because translating street names gives rise to numerous discrepancies in the target language and causes ambiguity. Both Yandex.Maps and Google Maps let users see a double legend that combines their own language with the original one [1, p. 236]. On these maps the names of parks, subway stations and other municipal locations are given both in the target language and in the source language, while most street names, again, are given in the source language.

Another mapping service, OpenStreetMap, on the other hand, does not adjust place names based on the user's Geo-IP but uses permanent place names assigned to each region, which allows a wider community of users to update legends on the map [3, p. 460].

The above-mentioned services, though different in many respects, share a common drawback: correcting object names on most of these maps still takes many steps. The more comprehensive and sophisticated these multilingual web mapping services become, the clearer it is that they need an on-the-go method of updating map legends in order to outdistance rivals and attract more users from different countries.

One of the recent solutions is to use so-called vector tiles, a technology that keeps the generated place names in attribute tables and provides them when the map is displayed. Vector tiles have been adopted by several major mapping platforms over the past few years, building on standards pioneered by OpenStreetMap and Mapnik [4]. Now more than a dozen other companies and open-source projects implement the same vector tile format. In this approach, layers of objects carry attribute tables in different languages, and, based on the user's Geo-IP, the map is displayed with place names in the appropriate language. This makes it possible to create Web maps localized for different countries and to apply corrections to object names on the go.
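The idea can be illustrated with a minimal client-side sketch. The `name:<lang>` keys below follow the OpenStreetMap tagging convention, while the `TileFeature` shape, the `localizedName` helper and the sample values are illustrative assumptions rather than the API of any particular vector-tile SDK.

```typescript
// Sketch: choosing a localized label from a vector-tile feature's attribute table.
// The user's language is assumed to come from a Geo-IP lookup performed elsewhere.

interface TileFeature {
  attributes: Record<string, string>; // e.g. { "name": "München", "name:en": "Munich", "name:ru": "Мюнхен" }
}

function localizedName(feature: TileFeature, userLanguage: string): string {
  const localized = feature.attributes[`name:${userLanguage}`];
  // Fall back to the original (source-language) name when no translation exists,
  // which is what the services above do for most street names.
  return localized ?? feature.attributes["name"] ?? "";
}

// Usage: render the label in the user's language and keep the original name
// as a second line to reproduce the "double legend" of Yandex.Maps and Google Maps.
const station: TileFeature = {
  attributes: { name: "München", "name:en": "Munich", "name:ru": "Мюнхен" },
};
console.log(localizedName(station, "ru")); // "Мюнхен"
console.log(station.attributes["name"]);   // "München"
```

Because such a name lives in an attribute table rather than in a pre-rendered raster image, correcting it only requires updating one attribute value; the tile itself does not have to be redrawn on the server.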

Text 5

Locker, L. (n.d.). Virtual Reality and Medicine. Retrieved March 3, 2017 from http://www.faculty.rsu.edu/users/c/clayton/www/locker/paper.htm/

VIRTUAL REALITY APPLICATION IN MEDICINE

In this article we are going to discuss one of the most amazing and astounding things in the world today: the promising computer technology of Virtual Reality (VR) and its implementation. But first, imagine that you are standing in your room; you click a button, and then you are standing a hundred miles away on the highest peak of Kilimanjaro. Can it be real? Fifty years ago it was just the idea of a few dreamers who really wanted to change the world. Nowadays this idea has become reality with the invention of virtual reality technology. So what is virtual reality? It is a three-dimensional, computer-generated environment that a person can explore and interact with. People interact with virtual reality using a VR headset and controls, such as adapters fixed on the body that send action signals to the headset. It is a very complex computer technology with applications in many spheres of human activity. But where did this incredible technology begin, and who invented it?

From the History of Virtual Reality

In the mid-1950s, a cinematographer named Morton Heilig built a single-user console called the “Sensorama” that included a stereoscopic display, fans, odor emitters, stereo speakers and a moving chair. He also invented a head-mounted television display designed to let a user watch television in 3-D. In the 1960s the Philco Corporation created the first proper precursor to the modern Head-Mounted Display. It was known as the Headsight, and it had one video screen for each eye as well as head-tracking ability. Bell Laboratories used a similar HMD for helicopter pilots: they linked HMDs to infrared cameras attached to the bottom of helicopters, which allowed pilots to have a clear field of view while flying in the dark. In 1965, a computer scientist named Ivan Sutherland envisioned what he called the “Ultimate Display”. Using this display, a person could look into a virtual world that would appear as real as the physical world the user lived in. In the 1990s we began to see virtual reality devices to which the public had access, although household ownership of cutting-edge virtual reality was still far out of reach [3].

Uses of Virtual Reality

When people encountered virtual reality for the first time, a common reaction was to start imagining all the different uses of this innovative technology. At present the capabilities of virtual reality are amazing: from flight simulators to robots performing important medical operations under human control – such is the power of virtual reality as we know it! Today virtual reality headsets and gadgets are among the most promising technologies and are used in many spheres of our life, for example in information systems, science, games, medicine, architecture, engineering, neuroscience, animation, education, etc. “You are limited by your imagination” – this principle works well when developing entirely new applications and gadgets for this technology, because the capabilities of virtual reality are unlimited. There are many ways of using this technology, but we would like to tell you about the application most helpful to humanity – its use in medicine.

Virtual Reality Application in Medicine

Virtual reality has been very useful in numerous disciplines of medicine: it is used to treat patients with phobias, as pain therapy for burn victims, and to simulate medical procedures. Using virtual reality in conjunction with traditional therapy for burn patients has proved very effective in relieving pain during wound care, bandage changing or staple removal. Pain perception has a psychological component and requires attention: when patients are immersed in virtual reality, their attention is focused on the artificial environment, so incoming pain signals may or may not be interpreted as painful. This important fact tells us that we can trick the human mind and make patients' sensations less painful [1].

Another treatment, used for patients with phobias, is exposure therapy. VR experiences provide a controlled environment in which patients can face their fears, practice coping strategies and break patterns of avoidance, all in a setting that is private, safe, and easily stopped or repeated, depending on the circumstances [2].

For people who have lost a limb, a common medical issue is phantom limb pain. For example, someone without an arm might feel as though he is clenching his fist very tightly and is unable to relax it. Frequently, the pain is sharper than that, even excruciating. Past treatments have included mirror therapy, in which the patient looks at a mirror image of the limb they still have and finds relief as the brain syncs the movements of the real and phantom limbs. The VR-based approach works like this: sensors pick up the nerve signals coming from the brain, and in a game the patient controls a virtual limb and must complete tasks. It helps them gain some control and learn, for example, how to relax that painfully clenched fist [2].

The Da Vinci Robot

The Da Vinci robot is the most advanced platform for minimally invasive surgery. Its control system consists of an ergonomic console, where the surgeon sits while operating, a high-resolution 3-D vision system and wristed instruments with intuitive motion control. Special attention must be paid to the high-definition 3-D vision system integrated with the ergonomic console. When the surgeon performs an operation using the Da Vinci robot, he sees a remarkably clear image of the surgical field. Through this 3-D vision system the surgeon is immersed in the operation – much like immersion in virtual reality, though not a complete one, as with a VR headset. There are four interactive robotic arms at the patient's side: one carries a small camera that sends signals to the 3-D vision system, and three carry the surgical tools. Performing medical operations by means of the Da Vinci robot has many advantages, namely smaller incisions, less bleeding and blood loss, and faster rehabilitation. This is important progress in surgery!

Conclusion

Virtual reality is an unbelievably amazing and very promising technology that allows humanity to take an entirely new look at the most ordinary spheres of activity. Virtual reality has proved very useful in several disciplines of medicine, in engineering and architecture, in education and the special training of surgeons, in science and neuroscience and even in animation. We can see the intensive development and evolution of virtual reality. But can we foresee the evolution of such a complex technology? We believe that virtual reality will help us solve many problems, but we should remember that any new technology can be used against humanity, to the detriment of our physical and psychological health. That is why we need to understand it clearly and learn how to use virtual reality for the benefit of humanity!

 

Text 6

BLUE BRAIN PROJECT

Retrieved April 4, 2017 from https://www.ted.com

 

The brain. So mysterious, so incredible… It is like a universe in which crazy things happen every day, every moment. It helps us to perceive what is around us. It helps us to learn new things.

The Blue Brain mission is to build a detailed, realistic computer model of the human brain. And this is what the team has done over the past four years: a proof of concept on a small part of the rodent brain, and with this proof of concept they are now scaling the project up to reach the human brain. In short, they could simulate a number of neurons sufficient for the brain of a clever cat.

The neocortex is the part of the brain thought to be responsible for higher functions such as conscious thought. It consists of several layers and is about as thick as a credit card. The number of layers is considered to determine the degree of development of thinking: dogs, for example, have four layers and humans have six. Vertically, these layers are combined into the neural columns of the cortex.

The participants of the Blue Brain Project took this column as the basis for building their models. You can imagine the neocortex as a grand piano with a million keys, where each column of the neocortex plays some note. When you stimulate it, you obtain a symphony. But it is not just a symphony of perception; it is a symphony of your universe, your reality. Why are they doing this? There are three important reasons.

– The first is that it is essential for us to understand the human brain if we want to get along in society, and I think that it is a key step in evolution.

– The second reason is that scientists cannot keep doing animal experimentation forever, and we have to embody all our data and all our knowledge in a working model. It is like a Noah's Ark, like an archive.

– The third reason is that there are two billion people on the planet affected by mental disorders, and the drugs that are used today are largely empirical. I think that we can come up with very concrete solutions on how to treat such disorders. What abilities will the creation of an artificial consciousness give us? There are many interesting possibilities. For example, we will better understand the main mystery: how the brain perceives the surrounding world. Setting tasks for the artificial consciousness is not even necessary. As the participants of the Blue Brain Project cleverly note, "we can't exactly articulate what consciousness is, so it is difficult even to talk about the problem of modeling it". However, they add that "consciousness can be modeled by itself".

In principle, there are no obstacles to building a model of the cortex of the human brain. With the increasing power of supercomputers and the development of modeling algorithms, the construction of a simulation of the entire cortex of the human brain within ten years looks quite likely. In the end, everything depends on a suitable supercomputer that could run the entire study.

Blue Gene is the supercomputer from IBM used in the project, in its second-generation Blue Gene/P model. If you decide to keep a virtual cat or hamster at home, you will need a 415-teraflops IBM supercomputer with 147,456 CPUs and 144 terabytes of RAM. The next step is simply to take these brain coordinates and project them into perceptual space. If the researchers do that, they will be able to step inside the reality that is created by this machine, by this piece of the brain. They think that the universe evolved a brain to see itself, which may be a first step in becoming aware of it. Hence you are at least partly convinced that it is not impossible to build a brain. I hope that in the future we will have such a "super brain" to solve the global problems of humanity.

Text 7

Preda, M. D. & Vidali, V. (2017). Abstract Similarity Analysis. Electronic Notes in Theoretical Computer Science, 331, 87–99.

 

Abstract Similarity Analysis

Code similarity analysis studies whether two programs are similar or whether one program is similar to a portion of another program (code containment). Code similarity is an important component of program analysis that finds application in many fields of computer science, such as reverse engineering of large collections of code fragments, clone detection, identification of violations of the intellectual property of programs, malware detection, software maintenance and software forensics. In these applications, when comparing two fragments of code it is important to take into account changes due to code evolution, compiler optimization and post-compile obfuscation. These code changes give rise to fragments of code that are syntactically different while having the same intended behavior. This means that it is important to recognize modifications of the same program obtained through compiler optimization or code obfuscation as similar. It is therefore necessary to abstract from syntactic changes and implementation details that do not alter the intended behavior of programs, namely those that preserve, to some extent, the semantics of programs.
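As a small illustration of why purely syntactic comparison is insufficient, consider the two functions below: they are syntactically different (one uses a loop, the other a closed-form expression), yet they compute the same value. The function names and the example itself are ours, chosen only to make the point.

```typescript
// Two syntactically different fragments with the same intended behavior:
// a similarity analysis that abstracts from syntax should treat them as similar.

// Iterative version: sums 1 + 2 + ... + n with a loop.
function sumLoop(n: number): number {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

// Closed-form version: the same sum computed as n * (n + 1) / 2,
// the kind of rewriting a compiler optimization or an obfuscator might produce.
function sumFormula(n: number): number {
  return (n * (n + 1)) / 2;
}

console.log(sumLoop(10), sumFormula(10)); // 55 55 – identical observable behavior
```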

In order to consider both semantic meaning and syntactic patterns, existing tools for similarity analysis often employ mixed syntactic/symbolic and semantic representations of programs, for example control flow graphs and dependency graphs that express the flow of control or the dependencies among program instructions. Recently, the authors have investigated the use of symbolic finite automata (SFAs) and their abstractions for the analysis of code similarity. SFAs were introduced as an extension of traditional finite state automata for modeling languages with a potentially infinite alphabet. Transitions in an SFA are modeled as constraints interpreted in a given Boolean algebra, which provides the semantic interpretation of the constraints and therefore of the (potentially infinite) structural components of the recognized language. The authors show how SFAs can be used to represent both the syntax and the semantics of programs written in an arbitrary programming language: the idea is to label transitions with syntactic labels representing program instructions, while their interpretation is given by the semantics of those instructions. Thus, SFAs provide the ideal formal setting for treating, within the same model, the abstraction of both the syntactic structure of programs and their intended semantics. A formal framework for the abstraction of syntactic and semantic properties of SFAs, and therefore of programs, turns out to be very useful for understanding existing similarity analysis tools and for developing similarity analysis tools based on semantic and syntactic properties of programs.
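A minimal sketch of this transition-labelling idea, under our own simplifying assumptions (instructions as strings, predicates over strings standing in for the Boolean algebra, and a tiny hand-written automaton), might look as follows. It illustrates the concept only and is not the authors' implementation.

```typescript
// Sketch of a symbolic finite automaton over program instructions:
// transitions carry predicates (guards) instead of single alphabet symbols,
// so the alphabet of concrete instructions may be infinite.

type Predicate = (instruction: string) => boolean;

interface Transition {
  from: number;
  to: number;
  label: string;    // syntactic label describing the instruction class
  guard: Predicate; // semantic interpretation: which concrete instructions match
}

interface SFA {
  initial: number;
  accepting: Set<number>;
  transitions: Transition[];
}

// Does the automaton accept this sequence of instructions?
function accepts(sfa: SFA, program: string[]): boolean {
  let states = new Set<number>([sfa.initial]);
  for (const instr of program) {
    const next = new Set<number>();
    for (const t of sfa.transitions) {
      if (states.has(t.from) && t.guard(instr)) next.add(t.to);
    }
    states = next;
  }
  return [...states].some((s) => sfa.accepting.has(s));
}

// Toy automaton: "an assignment to x, then any number of arithmetic
// instructions, then a return". The predicates are illustrative only.
const toySfa: SFA = {
  initial: 0,
  accepting: new Set([2]),
  transitions: [
    { from: 0, to: 1, label: "assign x", guard: (i) => i.startsWith("x =") },
    { from: 1, to: 1, label: "arithmetic", guard: (i) => /[+\-*\/]/.test(i) },
    { from: 1, to: 2, label: "return", guard: (i) => i.startsWith("return") },
  ],
};

console.log(accepts(toySfa, ["x = 0", "x = x + 1", "return x"])); // true
```

Because the guards, not the literal instruction strings, decide which transitions fire, two syntactically different instruction sequences with the same semantic shape can be accepted by the same automaton, which is exactly the kind of abstraction the paragraph above describes.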

Text 8

Arora, B. (2016, July 5). Exploring and analyzing Internet crimes and their behaviours. Perspectives in Science, 8, 540–542. Retrieved December 6, 2017 from https://core.ac.uk/download/pdf/82405095.pdf

 

