Abstract

Today's digital computers and the brain are essentially different things. Most of the operations a computer performs reduce to calculating, transferring, and comparing data. In contrast, most of the operations the brain performs well are characterized by pattern processing. This generalization may be surprising at first, but in activities as diverse as holding a conversation, recognizing a face, or driving in traffic, the brain constantly operates on changing patterns of stimuli.

This activity, pattern recognition, requires the capacity to compare large amounts of input information simultaneously and to generate a categorical output. It also requires the ability to produce a reasonable answer from noisy or incomplete input. For example, hearing only a few musical notes can bring back a complete sensory experience, including scenes, sounds, and fragrances.

The "ARTIFICIAL NEURAL NETWORKS" technology, or simply "ANN", can be considered as a mathematical pattermaking method inspired on the nervous biological system structure, where the key aspect of the process is its parallel inherent computation.

To demonstrate the optimization of the BAM-type ANN, a diagnostic and control system for mechanical (sucker-rod) pumping systems was developed using dynamometer charts, or dynagrams (qualitative analysis). These charts are the graphic representation of the work performed by this artificial lift mechanism, used by the petroleum industry in its different operational areas for hydrocarbon production; from them, the condition of the downhole pump, the sucker rods, and even the balance of the unit itself can be assessed.

The diagnostic accuracy achieved on dynagraph charts through the optimization of the BAM-type ANN is about 95 percent, with respect to the knowledge stored in the network during its training period, in real time and at high data rates. This makes the optimized network a "new" alternative to conventional methods for process diagnosis and control.
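As a concrete illustration of the BAM principle underlying this approach, the sketch below (Python with NumPy) shows how a Kosko-type Bidirectional Associative Memory can store dynagram/diagnosis pairs as a correlation matrix and recall a diagnosis from a noisy card. The encoding, card length, class codes, and function names are illustrative assumptions and do not reproduce the paper's implementation.

```python
# Minimal sketch of a Bidirectional Associative Memory (BAM) for dynagram
# diagnosis. Card size, class codes, and prototypes are illustrative only.
import numpy as np

def train_bam(cards, labels):
    """Build the BAM weight matrix W = sum_k x_k y_k^T from bipolar pairs."""
    n, m = cards.shape[1], labels.shape[1]
    W = np.zeros((n, m))
    for x, y in zip(cards, labels):
        W += np.outer(x, y)
    return W

def recall(W, x, max_iter=20):
    """Bidirectional recall: iterate x -> y -> x until the pair stabilizes."""
    y = np.sign(W.T @ x)
    y[y == 0] = 1
    for _ in range(max_iter):
        x_new = np.sign(W @ y)
        x_new[x_new == 0] = 1
        y_new = np.sign(W.T @ x_new)
        y_new[y_new == 0] = 1
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break
        x, y = x_new, y_new
    return y

# Two stored card prototypes encoded as bipolar vectors (hypothetical data).
rng = np.random.default_rng(0)
cards = np.sign(rng.standard_normal((2, 64)))          # 2 prototype dynagrams
labels = np.array([[1, -1, -1, 1], [-1, 1, 1, -1]])    # bipolar class codes

W = train_bam(cards, labels)

# A noisy field card (10 flipped samples) should still recall its class code.
noisy = cards[0].copy()
flip = rng.choice(64, size=10, replace=False)
noisy[flip] *= -1
print(recall(W, noisy))   # expected to match the first class code
```

In practice the bipolar card vector would be sampled from the measured load-position curve rather than random prototypes, and the class codes would correspond to pump conditions such as normal operation, fluid pound, or gas interference.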

Introduction

The world we live in surrounds us with different images: phenomena, objects, and situations. When we receive them, we inevitably divide them into groups of similar images; that is, we classify them on the basis of past experience, because when we look we know that before us there is a ship, a cup, a butterfly, and so on. Why, then, should we not expect machines capable of 100 million floating-point operations per second to understand the meaning of shapes in visual images, or to distinguish between different kinds of similar objects? Why can the same machine not learn from experience instead of forever repeating an explicit set of instructions?

If the only tool we have is a sequential computer, then we naturally try to solve every problem in terms of sequential algorithms. Nevertheless, many problems are not suited to this approach, and great effort can be spent developing sophisticated algorithms without reaching an acceptable solution.

As an example of the difficulties encountered when an intrinsically parallel operation is carried out on a sequential computer, consider the problem of visual image recognition.

Since conventional computers are not well suited to this type of problem, certain characteristics of brain physiology have been taken as the basis for new processing models. These techniques have therefore been called connectionist models, or simply artificial neural networks.

The importance of these models is that all of them show useful capabilities for learning, recognizing, and applying relations between objects in the real world. Nevertheless, they have certain limitations:

  • Slow processing rate

  • Excessive use of memory
