Making Sense of Machine Learning in Digital Farming

Digital farming promises improved crop yields (higher revenues) and more precise knowledge and application of nutrients, crop protection chemicals, and water (lower costs). These are just some of the potential benefits. Getting there, however, raises numerous issues, many of them hidden and interrelated. One of these is the choice of algorithms, the analytic approaches we are automating, and it is the subject of this article.

In the world at large, there is intense discussion of Machine Learning (ML) and Artificial Intelligence (AI), but little real understanding of what they involve. There is much talk of how ML and AI will transform our lives, but little discussion of what they are and where they originate. In the systems world, these discussions center on data and on the insight that analytics promises into that data, into everything around us, and beyond.


The emergence of digital farming is prompted, in part, by advances in remote sensing, which can be thought of as advanced scouting. Remote sensing extends the human ability to scout a field with greater coverage, more frequent assessments, and improved resolution and change detection. Of equal weight in this emergence are the developments in ML and AI. What follows is a brief account of ML and AI, with emphasis on ML, and a look at some of the considerations to keep in mind along the way.

ML goes by many different names and is often confused with, and used interchangeably with, AI. The two are distinct concepts, as much alike, and as different, as precision agriculture and digital farming. ML takes on the role of learning, while AI delves into the processes of thinking and problem solving. Ultimately, these explorations may lead to an understanding of consciousness. One may lead to the other, though many who study these concepts would dispute that outcome. There may even be a Master Algorithm (MA) on the path to ML and AI, and from both of these to consciousness, or self-awareness. Below, I present some of the early concepts used in ML; these are the ones being explored for their potential as learning tools in the MA quest.


The goal of ML and AI is to teach computers to do what humans currently do better, and learning is arguably the most important of these things: without it, no computer can keep up with a human for long. In practice, ML goes by many names: pattern recognition, statistical modeling, data mining, knowledge discovery, predictive analytics, data science, adaptive systems, self-organizing systems. Each is used by a different community for specific tasks, and each has its own associations and applications.


At its core, ML is about prediction: predicting what we want, the results of our actions, how to achieve our goals, and how the world will change. You cannot control what you cannot model and do not understand, and that is why we need to understand the tools we are attempting to use.
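
As a minimal sketch of prediction in this sense, the snippet below fits a straight line to a handful of hypothetical rainfall and yield observations (the numbers are invented purely for illustration) and uses it to predict yield for a new season:

```python
import numpy as np

# Hypothetical observations: seasonal rainfall (mm) and yield (t/ha).
# These values are invented for illustration only.
rainfall = np.array([310.0, 420.0, 500.0, 610.0, 700.0])
yield_t_ha = np.array([2.1, 2.9, 3.4, 4.0, 4.3])

# Fit a simple linear model: yield ~ slope * rainfall + intercept.
slope, intercept = np.polyfit(rainfall, yield_t_ha, deg=1)

# Predict the yield for a season with 550 mm of rainfall.
predicted = slope * 550.0 + intercept
print(f"predicted yield at 550 mm: {predicted:.2f} t/ha")
```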

Within this ML discussion there is a separate subject that must also be understood: the context of the learning problem, called architecture. Architecture will not be covered here; it needs a far more expansive treatment than is possible in this article. The importance of the subject can be stated plainly: every system has an architecture. Intended or unintended, an architecture exists. Our job is to understand that architecture so we can manage it. ML and AI are the processes, or algorithms, by which we predict the outcomes of, and manage, the systems represented.

For ML, hundreds of new learning algorithms are invented every year, but they are all based on the same few conceptual approaches; a brief sketch of one of them follows the list. These few concepts are:

  1. Symbolists view learning as the inverse of deduction and take ideas from philosophy, psychology, and logic; their signature technique is inverse deduction.
  2. Connectionists reverse engineer the brain and are inspired by neuroscience and physics; they are most recognizable in backpropagation techniques.
  3. Evolutionaries simulate evolution on the computer and draw on genetics and evolutionary biology; their signature technique is genetic programming.
  4. Bayesians believe learning is a form of probabilistic inference, a view rooted in statistics. Bayesian inference, formulated in the 1700s, is well proven and traces through some of the more important discoveries and applications of our time.
  5. Analogizers learn by extrapolating from similarity judgments and are influenced by psychology and mathematical optimization.
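
As a minimal sketch of the Bayesian concept (the scenario and all probabilities are invented for illustration), the snippet below applies Bayes' rule to update the belief that a field patch is stressed after a remote sensing flag:

```python
# Bayes' rule: P(stressed | flag) =
#   P(flag | stressed) * P(stressed) / P(flag)
# All probabilities below are hypothetical, for illustration only.

p_stressed = 0.10          # prior: 10% of patches are stressed
p_flag_if_stressed = 0.90  # sensor flags 90% of stressed patches
p_flag_if_healthy = 0.20   # sensor falsely flags 20% of healthy ones

# Total probability of seeing a flag on a random patch.
p_flag = (p_flag_if_stressed * p_stressed
          + p_flag_if_healthy * (1.0 - p_stressed))

# Posterior probability that a flagged patch is actually stressed.
p_stressed_given_flag = p_flag_if_stressed * p_stressed / p_flag
print(f"P(stressed | flag) = {p_stressed_given_flag:.2f}")  # ~0.33
```

Note how the posterior (about 33%) sits between the weak prior and the strong sensor evidence; this weighing of evidence against prior belief is the essence of the Bayesian approach.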

In selecting which algorithmic concept to use, there is no silver bullet. The quest for a universal ML algorithm is ultimately the search for the MA that unifies these concepts into a single learning machine.

For the foreseeable future, the selection will be a combination of these concepts, with applicability determined by the architecture or context and the problem to be solved. Indeed, today we should expect all of the algorithmic concepts to apply to system-level problems at different points in the architecture, in time, and in the state of the issue. If a unifying master algorithm emerges, it will replace all of them.

These concepts have their genesis in the exploration of the emergent field of artificial intelligence and, ultimately, of consciousness. Each exploration contributes to the development of AI, and because of these endeavors many practitioners are modeling the concepts in novel ways to begin the abstraction that leads to discovery. These combinations give us insight both into the analytic concepts themselves and into the entity being modeled.

We live in a world of algorithms. The simplest algorithm is the flip of a binary switch; this is the current world of digital electronics. Emerging concepts are now being explored in the world of quantum computing, where a system can occupy superpositions of states rather than just two digital states. There is also a growing belief that many of the most nuanced and complex behaviors in physics, biology, weather, and so on are only tractable in the quantum world, with its use of complex (imaginary) numbers. One can begin to picture how the exploitation of quantum insight and the application of advanced complex-valued mathematics might also provide an avenue of exploration for the MA, and even consciousness.
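
As a minimal sketch of what "more than two states" means, the snippet below simulates a single qubit classically; the amplitude values are arbitrary, chosen only to show a complex phase at work:

```python
import cmath
import math

# A qubit state |psi> = a|0> + b|1>, where a and b are complex
# amplitudes with |a|^2 + |b|^2 = 1. Unlike a binary switch, the
# state is a weighted blend of 0 and 1 until it is measured.
a = 1 / math.sqrt(2)
b = cmath.exp(1j * math.pi / 4) / math.sqrt(2)  # complex phase on |1>

# Probabilities of observing 0 or 1 upon measurement.
p0 = abs(a) ** 2
p1 = abs(b) ** 2
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # 0.50, 0.50

# The complex phase does not change these probabilities, but it does
# change how states interfere when amplitudes are combined.
```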

As this subject matures, we may approach the concept of consciousness, if that is even possible. This world, also in search of a master algorithm, may offer a glimpse of new algorithms as the field emerges and as the need for more complete decision models becomes apparent.

Before leaving this subject, we must also recognize that applying a master algorithm will require addressing an unimaginable plethora of data. Storing and retrieving data under these concepts demands new understanding in storage and communications, and a breakdown of the barriers imposed by the world's insistence on forcing all data into a relational model. There are at least four conceptual data models: relational, object, key-value, and graph; as we move into the quantum world there may be others. We should already be able to see how these four align with, and implement, the five learning algorithm concepts.
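
As a minimal sketch of the four data models (all field and crop records here are hypothetical), the same fact, "field F1 grows corn," can be expressed in each:

```python
# The same fact, "field F1 grows corn," in four conceptual data
# models. All names and values are invented for illustration.

# 1. Relational: a row in a table with named columns.
relational = [{"field_id": "F1", "crop": "corn"}]  # table 'fields'

# 2. Object: state and behavior bundled together.
class Field:
    def __init__(self, field_id, crop):
        self.field_id = field_id
        self.crop = crop

obj = Field("F1", "corn")

# 3. Key-value: an opaque value looked up by a single key.
key_value = {"field:F1": "corn"}

# 4. Graph: nodes joined by labeled edges.
graph = {"nodes": ["F1", "corn"],
         "edges": [("F1", "GROWS", "corn")]}
```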

With all of the work of the last three or four decades on architectures, it is becoming apparent that there is an intrinsic structure, or form, to the description and linkage of architectural elements. This fractal-like form can be, and must be, available for reuse, modification, linkage, recursion, expansion, and replication across the operational, material, software, and human views of the system. These fractal-like forms provide the context for all analysis, for linkages (internal and external, between every element and across views between individual elements), and for the application of algorithms. They are the syntax of a language, the basis for structuring knowledge, and the fabric of system models: the architectures.

These fractal-like forms are not in the frameworks; they transcend the frameworks. Again, all systems have an architecture; intended or unintended, an architecture exists and must be managed. Intrinsic to a framework are some of the elements and models that describe the architecture. All of the elements and models must be revealed, and this must be done recursively. To manage the architecture, the whole architecture must be revealed, modeled, and analyzed. There is an overwhelming amount of data, a plethora of learning algorithms, and a context spanning all of the taxonomies and ontologies of architecture, all of which must be addressed simultaneously.

An algorithm is not just any set of instructions; it must have the precision and lack of ambiguity needed for a computer to execute it. If a theory cannot be expressed as an algorithm and implemented in a formal structure, it is not entirely rigorous.
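
As a concrete instance of this precision, consider a growing degree day routine, a standard agronomic calculation (the base temperature shown is the common value for corn; the daily readings are invented). Every step is exact and leaves no room for interpretation:

```python
def growing_degree_days(t_max_c, t_min_c, t_base_c=10.0):
    """Growing degree days for one day, simple averaging method.

    Average the daily high and low, subtract the crop's base
    temperature, and floor negative values at zero.
    """
    mean_temp = (t_max_c + t_min_c) / 2.0
    return max(0.0, mean_temp - t_base_c)

# Accumulate GDD over a short run of hypothetical daily readings.
days = [(28.0, 14.0), (31.0, 17.0), (22.0, 9.0)]
total = sum(growing_degree_days(hi, lo) for hi, lo in days)
print(f"accumulated GDD: {total:.1f}")  # 11.0 + 14.0 + 5.5 = 30.5
```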

Over time, computer scientists build on each other's work and invent algorithms for new things (automations). Algorithms combine with other algorithms, using the results of other algorithms (orchestrations) and in turn producing results for still others. This ultimately increases complexity along multiple aspects of the architecture: scale, space, time, and human. These complexities must be accounted for in the selection of technologies, the implementation of algorithms, and the architectural design of any modeling initiative.
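
As a minimal sketch of orchestration (all function names, thresholds, and readings here are hypothetical), each step consumes the previous algorithm's output:

```python
# A toy orchestration: each algorithm consumes the last one's output.
# All functions and values are hypothetical, for illustration only.

def clean(readings):
    """Automation 1: drop obviously bad sensor readings."""
    return [r for r in readings if 0.0 <= r <= 1.0]

def vegetation_index(readings):
    """Automation 2: summarize cleaned readings into one index."""
    return sum(readings) / len(readings)

def recommend(index, threshold=0.45):
    """Automation 3: turn the index into a scouting recommendation."""
    return "scout this field" if index < threshold else "no action"

# The orchestration: clean -> summarize -> recommend.
raw = [0.52, 0.47, 1.7, 0.41, -0.2, 0.38]
print(recommend(vegetation_index(clean(raw))))  # scout this field
```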
