Cognitive Computing Assistants for Geoscientists: Automatic sentiment-tone analysis and similarity prediction.

In previous posts I have shown how ‘sentiment/tone/opinion’ can be automatically extracted from text to stimulate serendipity during search and to analyse differences between Geological Formations. One finding was that words/sentences deemed ‘negative’ by out-of-the-box algorithms are not necessarily ‘negative’ in a geological sense (such as ‘hard’, ‘spill’, ‘old’, ‘fault’ and ‘thick’). Another was that the typical entity extractions, entity-entity association extractions and ontologies described in the geoscience literature tend to leave behind valuable sentiment/tone context. Some of my recent academic research (to be presented at the Geological Society of America (GSA) this month) has focused on using machine learning from training sets to assess ‘tone’ in reports relating to working petroleum system elements in space and time, effectively targeting a generic question that could apply to many domains in Geoscience and beyond:

“Do the right conditions exist for…...?”

Fig 1 shows sentiment geographically and Fig 2 by geological age.


Figure 1 – Map plot. Comparison of positive v negative tone for various elements in different geographical locations

For example, the sentence “…well northeast of Boomerang Hills has tested this pre-Andean trap concept successfully” will count as a ‘positive’ for the petroleum system element ‘Trap’. The sentence “downward migration from Upper Cretaceous-sourced oils seems unlikely” will count as a ‘negative’ for the element Migration (SR Charge). Simply ‘counting’ entities is not enough: consider the mention “The reservoir was absent”! Without contextual sentiment, it is likely that misleading data will be presented.

This could provide another ‘opinion’ for scientists which could challenge preconceptions about what they may already believe. It could also stimulate learning events (clicking on the sentiment to view the mentions within sentences) that may prove academically and/or commercially valuable.

I have been experimenting with a custom ensemble algorithm (skip-gram, lexicon and Bayesian) that takes word order into account, to detect this ‘positive’ and ‘negative’ sentiment (opinion and tone) around entities. Deep learning text embeddings would probably improve the ensemble accuracy (Araque et al. 2017), but I have not used them here as I am testing a very small dataset. See previous posts where I have used these techniques for different purposes on larger datasets.
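The ensemble itself is not reproduced here, but the idea can be sketched in a few lines of standard-library Python: a tiny Naive Bayes vote combined with a domain-lexicon vote. The training sentences, lexicon entries and voting scheme below are purely illustrative assumptions, not the actual algorithm.

```python
import math
from collections import Counter

# Tiny illustrative training set (the real one uses many labelled report sentences)
TRAIN = [
    ("well tested this trap concept successfully", "pos"),
    ("excellent source rock maturity in the basin", "pos"),
    ("downward migration seems unlikely", "neg"),
    ("the reservoir was absent", "neg"),
]

# Domain lexicon: note that 'fault' and 'thick' are deliberately NOT negative
LEXICON = {"successfully": 1, "excellent": 1, "unlikely": -1, "absent": -1}

def train_nb(examples):
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def nb_score(counts, text):
    # Log-likelihood ratio with add-one smoothing; positive means 'pos' leaning
    vocab = set(counts["pos"]) | set(counts["neg"])
    score = 0.0
    for w in text.split():
        p = (counts["pos"][w] + 1) / (sum(counts["pos"].values()) + len(vocab))
        n = (counts["neg"][w] + 1) / (sum(counts["neg"].values()) + len(vocab))
        score += math.log(p / n)
    return score

def ensemble(counts, text):
    # Each component casts a vote; the majority wins
    lex = sum(LEXICON.get(w, 0) for w in text.split())
    nb = nb_score(counts, text)
    votes = (1 if lex > 0 else -1 if lex < 0 else 0) + (1 if nb > 0 else -1)
    return "pos" if votes > 0 else "neg"

counts = train_nb(TRAIN)
print(ensemble(counts, "trap tested successfully"))   # pos
print(ensemble(counts, "migration seems unlikely"))   # neg
```

A real skip-gram component would also consider word order and gaps between words; this sketch only shows how independent weak classifiers can be combined by voting.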

The proportion of negative versus positive instances can then be used to show relative trends (pie charts in Fig 1) for each element and rolled up to higher-level constructs. Figure 2 (multi-series bubble plot) shows the same data (again from a very small sample of USGS reports, to illustrate the concept) focusing on the Source Rock/Charge element. This enables the data matrix to be plotted wherever the algorithms have picked up a Geological Age as well as a location ‘mention’ in the text.


Figure 2 – Geological Time charts plots for sentiment/tone mentions in context to a geographical area/basin. The larger the bubble, the greater the number of mentions.

The categorization is very coarse (Basin level). Ideally it would be more useful to extract specific Intra-Basin features and/or geographical areas. The geological ages of source rocks and charge/migration events are also conflated somewhat in this simplified picture, although they could easily be split out. Also, given sufficient document volumes, it should be possible to animate Figures 1 and 2 through time, for example to show how sentiment/tone has changed each year from 1990 through to 2017.

By machine reading documents, papers and texts (too many in number to be realistically read by a person, and harbouring patterns too subtle to be picked up in any single document) a perspective can be obtained which may challenge individual biases and/or organizational dogma.

Public domain reports from the United States Geological Survey (USGS) were downloaded for testing. Python and TextBlob scripts were used to convert the reports to text, identify mentions of Petroleum System Elements, and determine whether the surrounding context carried ‘positive’ or ‘negative’ sentiment. Geo-referencing can be achieved through the centroid of the country, basin or geographical point of interest associated with each mention.
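A minimal sketch of that pipeline, using only the standard library, might look as follows. The element lookup, tone word lists and the Boomerang Hills centroid are illustrative assumptions, not the actual lists used on the USGS corpus.

```python
import re

# Hypothetical lookup lists (illustrative only)
ELEMENTS = {"trap": "Trap", "reservoir": "Reservoir",
            "migration": "Migration", "source rock": "Source Rock"}
CENTROIDS = {"Boomerang Hills": (-63.5, -17.0)}   # (lon, lat) centroid, assumed
POSITIVE = {"successfully", "proven", "effective"}
NEGATIVE = {"absent", "unlikely", "breached"}

def mentions(text):
    """Return (element, tone, geo-reference) triples for each sentence."""
    out = []
    for sent in re.split(r"(?<=[.!?])\s+", text):
        low = sent.lower()
        for key, element in ELEMENTS.items():
            if key in low:
                words = set(re.findall(r"[a-z]+", low))
                tone = ("positive" if words & POSITIVE
                        else "negative" if words & NEGATIVE else "neutral")
                # Geo-reference via the centroid of any named place mentioned
                geo = next((c for p, c in CENTROIDS.items()
                            if p.lower() in low), None)
                out.append((element, tone, geo))
    return out

text = ("The well northeast of Boomerang Hills has tested this trap "
        "concept successfully. The reservoir was absent.")
print(mentions(text))
```

The real system uses a trained classifier rather than word lists for tone, but the mention-then-context structure is the same.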

The algorithm addresses areas such as negation and avoids some of the problems with context free Bag of Words (BoW) models. For example “Source Rock maturity was not a problem” is a ‘positive’ context, despite having individual ‘negative’ words such as ‘not‘ and ‘problem‘. This is where traditional lexicon/taxonomy approaches (even using multiple word concepts) are likely to perform poorly.

Further work involves ascertaining precision, recall and F1 accuracy scores, and I am currently working on a test set of over 2,000 examples of positive, negative and neutral sentiment about these entities extracted from public domain sources. Differentiating tone into various dimensions may also be useful. These may be promising techniques to augment geoscientists’ cognition, supporting higher-level thinking processes rather than just retrieval (remembering) of documents in traditional search applications.

Although all Geological Basins are unique, from Figure 2 it is obvious that some Basins/Areas may share common aspects. Utilising positive and negative tone by geological age, clustering techniques can be applied to the data matrix to suggest analogues (including Intra-Basin ones) purely from the latent structure in text. No prior studies have been found which address this area and ascertain its usefulness. Fig 3 shows one such technique applied to positive/negative tone for the Source Rock/Charge element, with correlations and hierarchical clustering shown in a sequentially coloured heatmap (Metsalu and Vilo 2015). Rows and columns have been automatically re-ordered through clustering; the colours displayed are the values in the data matrix.
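The core of the analogue idea is small enough to sketch without any clustering library: treat each Basin as a vector of net tone per geological age, and the closest pair of vectors is the candidate analogue, just as the dendrogram's first merge would suggest. The matrix values below are invented for illustration.

```python
import math
from itertools import combinations

# Illustrative basin-by-age net-tone matrix (positive minus negative mentions);
# the numbers are made up purely to demonstrate the mechanics
MATRIX = {
    "Sirte":   [3, 1, 0, 2],
    "Tamara":  [3, 1, 1, 2],
    "Permian": [0, 4, 2, 0],
}

def distance(a, b):
    # Euclidean distance between two basins' tone profiles
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# The closest pair is what hierarchical clustering would merge first
pair = min(combinations(MATRIX, 2),
           key=lambda p: distance(MATRIX[p[0]], MATRIX[p[1]]))
print(pair)
```

Full hierarchical clustering simply repeats this merge step on progressively grouped profiles; tools such as the one cited (Metsalu and Vilo 2015) add the heatmap rendering and row/column re-ordering.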


Figure 3 – Clustering (Correlation Clustering) Basins/Area and Geological Time for Source Rock/Charge by sentiment.

From Figure 3 it can be seen (dendrogram on left) that Sirte and Tamara are the two most similar (with the caveat that we are using extremely limited data to illustrate the concept). It is relatively straightforward to see how, in theory, this could be applied to a vast amount of sentiment data (more dimensions, and perhaps Lithostratigraphy), potentially making more non-obvious connections where similar conditions exist, especially if numerical (integer/float) data is extracted from text and/or brought in from additional sources.

These techniques ‘mimic’ some simple human thought processes, hence the term ‘cognitive’. However, machines in my opinion do not read text “like people do”, despite technology marketing slogans. The Geoscientist may however benefit from using some of these techniques, which are freely available. After all, why wouldn’t you want to seek opinion from a crowd of somewhat independent scientists who have authored hundreds of thousands of reports? If it confirms your existing mental model then it is good confirmatory supporting evidence. If it challenges it, that does not mean you are wrong, but it may stimulate a little more reflection and investigation. Subsequently, you may stick with what you thought. On the other hand, it may radically change your view.

Keywords: Sentiment Analysis, Enterprise Search, Big Data, Text Analytics, Machine Learning, Cognitive Search, Insight Engines, Artificial Intelligence (AI), Geology, Petroleum Systems, Oil and Gas, Geoinformatics

 


PhD Judged “Top 5” Internationally for Information Science.

Surprised and delighted to be informed that my PhD has been judged in the “Top 5” internationally in 2017 for Information Science in the ProQuest Doctoral Dissertation Award.

My thesis topic was Re-examining and re-conceptualising enterprise search and discovery. The Association for Information Science and Technology (ASIS&T) scope includes any PhD related to, “the production, discovery, recording, storage, representation, retrieval, presentation, manipulation, dissemination, use, and evaluation of information and on the tools and techniques associated with these processes.”

The judges’ comments included: “As far as I know, this is the first comprehensive and holistic work studying enterprise search; this is a pretty relevant theme and the contributions of the thesis are sizeable” and “Findings from this thesis have direct implications for the theories and practices in information science”.

A big thanks to my supervisory team of Professor Simon Burnett (Robert Gordon University) and Dr Laura Muir (Edinburgh Napier) along with everyone who has helped and encouraged me. It further motivates me to continue academic research in this area and to make further contributions to the discipline in what is a tremendously exciting time.


Applying sentiment analysis to oil & gas company reports.

Sentiment Analysis

I presented at the International Society for Knowledge Organization (ISKO) this week, sharing findings of an exploratory study. A Knowledge Organization System (KOS) was automatically applied to the annual company reports of four similar-sized oil and gas companies to detect forward-looking strong and hesitant sentiment, in order to surface rhetoric and social phenomena and to predict future business performance.

The “Discovery” part of “Enterprise Search & Discovery” is arguably downplayed in much of the existing academic and practitioner literature. In addition to finding what you know exists (or finding document ‘containers’ that you did not), there may be a case to embed various sentiment algorithms as standard in enterprise search & discovery technology deployments. Designing with ‘serendipity in mind’, this may move the intent of a deployment from one of pure retrieval to one of pattern recognition, where ‘trace fossils’ may exist in the information aggregate, not discernible from any single document.

The utilization of such algorithms to ‘compare’ and ‘contrast’ perhaps in a web part in the user interface, may move the enterprise search & discovery tool further up the Bloom’s Taxonomy pyramid, in assisting higher forms of thinking (along with delivering the surprising). It may not make sense for many queries made in general purpose ‘Google-like’ search tools deployed behind a company’s firewall, but detecting queries which do could be a useful undertaking. As described in a previous post many things can have a ‘sentiment’ which may act as a catalyst for further inquiry and potential new learnings. Whilst sentiment analysis is a useful technique when you have an a priori hypothesis in mind, it could well surface interesting phenomena even when you don’t.

Click here for link to presentation

Automated Forward-looking Sentiment Analysis, Search Engine Bias and Cognitive Search in Geoscience

Just a quick update on what I have been up to these past few hectic months as my last blog post was back in May this year. Below are some papers I have been working on over the summer and upcoming conferences I will be presenting at:

[Photo: Golden Gate]

Conducted some research recently in California (more on this in later posts)

Sentiment Analysis in organizational reports

I will be presenting on the 11th September in London at the ISKO conference in a collaboration with Laura Muir (Associate Professor of Information Systems at Edinburgh Napier University). The topic will be applying automated sentiment analysis to identify forward-looking sentiment (about the future) in company reports. This provides an indicator of how confident an organization feels about the future and may be dosed with rhetoric. We used biologically inspired word diversity algorithms which, to our knowledge, have not been used before to assess forward-looking sentiment. We also investigated predictive links to future financial performance and organizational phenomena such as the reaction to a crisis. I hope to share the presentation and paper shortly in the public domain. I think there are some very exciting findings and opportunities for companies to develop new knowledge as well as conduct further research: http://www.iskouk.org/content/isko-uk-conference-2017-knowledge-organization-whats-story

Search Engine Bias

Information Today published an extended article I wrote on search engine bias in their Sep/Oct 2017 edition here: http://www.infotoday.com/OnlineSearcher/Issue/7398-September-October-2017.shtml . It is an extension of the blog post I made earlier this year https://paulhcleverley.com/2017/04/24/are-search-algorithms-neutral/ including links to ‘fake news’ and bias within enterprise search & discovery technology. Information Today requires a subscription for the latest issues.

Cognitive Search Assistants in the Geosciences

Delighted that my paper on Cognitive Search Assistants in the Geosciences was accepted for the Annual Meeting of the Geological Society of America (GSA) in Seattle during October 2017. This builds on and further extends existing research I published previously on this site: https://paulhcleverley.com/2017/05/28/text-analytics-meets-geoscience/ , https://paulhcleverley.com/2016/08/01/teaching-machines-about-a-subject-like-oil-and-gas/ and work I presented a few years ago in Turkey https://paulhcleverley.com/2015/05/13/creating-sparks/ . These tools and techniques move beyond traditional deductive inference, to include both an inductive and abductive inference focus. I will be sharing the presentation and paper in the public domain later in the year.

 

TEXT ANALYTICS MEETS GEOSCIENCE

I presented some text analytics work at a recent GeoScienceWorld (GSW) meeting in New Mexico, USA. GSW is a not-for-profit cooperative of Geological Societies, Associations & Institutes to disseminate geoscience information. First, some information on the trip, then the analytics!

FIELD TRIP

The Geological Field Trip was to the Santa Fe and Abiquiu areas, approximately 7,000 ft above sea level. To the west, across the Rio Grande Rift Basin, are the Jemez Mountains (a super volcano) and the town of Los Alamos (home of the Manhattan project). To the north is the Colorado Plateau and Ghost Ranch, where over 100 articulated skeletons of the Triassic theropod dinosaur Coelophysis have been found (the state fossil of New Mexico). These would have stood about one metre tall at the hips and up to three metres long.

[Photo: Coelophysis]

The red cliffs at Chimney Rock contain Triassic deposits overlain unconformably by cross bedded Jurassic desert sandstones topped with white limestone and gypsum in places. The beautiful scenery of Chimney Rock can be seen in photo below:

[Photo: Chimney Rock]

The view from the top of Chimney Rock is even more breath-taking in the photo I took below.

[Photo: view from the top of Chimney Rock]

ANALYTICS

There has been a continuing shift from just Information Retrieval (IR) systems – a search box and ten blue links, to the search for patterns, through what is increasingly called ‘insight engines’ within the cognitive computing paradigm. After all, big data is about small patterns.

All of the work below represents approximately five days’ work and shows what is possible in a short space of time using some of the techniques available today. I wrote scripts using Python and open source utilities; these included some new techniques not published before. For the analytic content, I used the Society of Economic Geologists (SEG) text corpus (1905-2017) as an example, focused on mining, mainly of heavy metals. This consists of over 6,800 articles, 4.3 million lines of text and 35 million words. Several examples of analytics techniques are shown below, increasing in their sophistication.

1. Statistical Word Co-occurrence

The results of counting the frequency of terms and their adjacent associations are called n-grams. The image below is a graph of nodes (unigrams) and edges (bigram associations) automatically generated from the SEG text corpus of journals.
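Counting unigrams and bigrams needs nothing beyond the standard library; a minimal sketch (with an invented two-sentence "corpus") is below. Nodes of the text graph are the unigram counts and edges are the bigram counts.

```python
import re
from collections import Counter

# Invented stand-in for the journal corpus
text = ("Hydrothermal veins host gold deposits. "
        "Gold deposits form in hydrothermal veins.")

tokens = re.findall(r"[a-z]+", text.lower())
unigrams = Counter(tokens)                   # graph nodes, sized by frequency
bigrams = Counter(zip(tokens, tokens[1:]))   # graph edges, weighted by frequency

print(unigrams.most_common(3))
print(bigrams.most_common(2))
```

A graph library (e.g. networkx) can then turn these two counters directly into the node-and-edge display; note this naive sketch also counts bigrams that span sentence boundaries, which a production script would filter out.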

Click here to view a video showing how the text graph can be explored

Terms that have high authority (many links) can clearly be seen, along with rarer terms with few associative links. This is one way to explore text in an easy and visual way which can be linked to queries to the documents or contexts in which those words or associations occur.

[Image: word co-occurrence graph]

These displays can be complemented by word clouds, with the most frequent associations stripped down to reveal the ‘more interesting’. Previous research I performed with geoscientists indicated that more frequent associations were ‘relevant but not interesting’. So stripping away the most frequent may be desirable, hyperlinking every word so scientists can drill down into the articles and sentences in which the associations are mentioned. The example below is for the search query ‘precambrian’.

[Image: word cloud for the query ‘precambrian’]

Just as the Google n-gram viewer allows someone to explore word usage through time, similar approaches can be taken with journals. The image below shows the trends of some common words in the SEG text corpus over the past twenty years. The y-axis is normalized relative word frequency (compared to total number of words used in that journal in that year).
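The normalization described is straightforward: divide the term's count in a given year by the total number of words published that year. A sketch with invented per-year token lists:

```python
from collections import Counter

# Hypothetical per-year token lists standing in for a year's journal output
corpus = {
    1998: "gold vein gold alteration".split(),
    2017: "gold gold gold hydrothermal vein".split(),
}

def relative_frequency(term):
    # Term count divided by total words in that year's journal output
    return {year: Counter(toks)[term] / len(toks)
            for year, toks in corpus.items()}

print(relative_frequency("gold"))  # {1998: 0.5, 2017: 0.6}
```

Plotting these values per year gives the n-gram-viewer-style trend lines shown in the figure.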

[Image: word frequency trends through time]

For example, from the image above it is plain to see that the popularity (frequency of occurrence) of the terms ‘gold’ and ‘hydrothermal’ has increased over the past twenty years, whilst that of ‘manganese’ and ‘metamorphic’ has decreased. The increase in popularity of ‘gold’ has been theorized as possibly related to the gold price, which has also been plotted on this chart!

2. Simple Entity Extraction

Extracting entities from text and associating them to a spatial context and geological time period has been of increasing interest to both academia and practice. The NSF EarthCube GeoDeepDive Cyber Infrastructure is one such example with some fascinating findings related to stromatolite distribution and sea water chemistry for example driven by patterns in text.

The example below shows the frequency of mentions (histogram in green) in SEG journal articles of geological periods (including their constituent sub-divisions). A knowledge representation (taxonomy) has been applied to the text in order to surface a pattern. This would appear to support a proposition that over more than a hundred years, the focus for mining geologists has been the Pre-Cambrian and Tertiary (Paleogene and Neogene) periods (denoted by the acronyms ‘PC’ and ‘T’ respectively on the y-axis). The Silurian period appears to have been of least interest.

[Image: geological period entity extraction histogram]

Plotting the world-wide distribution of copper ore by Geological age (orange line) as a form of ‘control’, supports the theory that patterns in journal text may surface ‘real’ trends and phenomena of interest.

3. Numerical Data Extraction

Another relatively common technique is to extract numerical integer and float data. The chart below shows the results of automatically extracting integer and float data in association with the mnemonic ‘ppm’ (parts per million), plotting wherever a value can be associated with a chemical element. The ppm data is on the y-axis (logarithmic), with mentions on the x-axis (1,709 were found in total). This could be turned into a hyper-linkable user interface, taking the user to the sentence/paragraph in question for each data point. This type of extraction is quite trivial, although potentially under-used by organizations, as many of these data are not necessarily stored in structured databases.
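The extraction step really is trivial: a regular expression that captures an integer or float immediately before the ‘ppm’ mnemonic. A minimal sketch on an invented sentence:

```python
import re

text = ("Arsenic concentrations reached 35 ppm in the oxidised zone, "
        "while gold averaged 0.8 ppm at depth.")

# Number (integer or float) immediately followed by the 'ppm' mnemonic
pattern = re.compile(r"(\d+(?:\.\d+)?)\s*ppm", re.IGNORECASE)
values = [float(m) for m in pattern.findall(text)]
print(values)  # [35.0, 0.8]
```

Associating each value with a chemical element is then a matter of looking for element names in the same sentence, as in the mention-extraction sketches above.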

[Image: ppm data extraction chart]

Another common technique is entity-entity matrices, showing how commonly two entities occur together in the same sentence or equivalent semantic text unit. The example below shows lithology and minerals for the SEG corpus.

[Image: lithology-mineral entity association matrix]

The associations are clustered using least squares to group similar lithology and mineral associations. You may just be able to pick out ‘Diamond’ on the middle right and its strongest association with Breccia and Conglomerate. These displays may reveal surprising associations worthy of further exploration and are used extensively in biomedical research for tasks such as gene discovery.
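Building the underlying matrix is a sentence-level co-occurrence count. A sketch using invented sentences and two small, hypothetical vocabulary lists:

```python
import re
from collections import Counter
from itertools import product

# Hypothetical vocabulary lists (real ones come from a taxonomy)
LITHOLOGY = {"breccia", "conglomerate", "limestone"}
MINERALS = {"diamond", "gold"}

sentences = [
    "Diamond is recovered from breccia pipes.",
    "The breccia grades into conglomerate with diamond indicators.",
    "Gold occurs in limestone hosted veins.",
]

matrix = Counter()
for sent in sentences:
    words = set(re.findall(r"[a-z]+", sent.lower()))
    # One count per sentence for each lithology-mineral pair present
    for lith, mineral in product(words & LITHOLOGY, words & MINERALS):
        matrix[(lith, mineral)] += 1

print(matrix[("breccia", "diamond")])  # 2
```

The resulting counts form the cells of the matrix; clustering then re-orders rows and columns so similar association profiles sit together.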

4. Geo-Sentiment

Looking at individual geological formation names as they appear in text, it may be possible to derive ‘sentiment’ and ‘subjectivity’ of the formation. Using Part of Speech (POS) tagging, nouns that occur before the phrase ‘Formation’ or ‘Fm’ for example, can be extracted.

The cross-plot below shows some Formation names that appear around the search query term ‘leaching’. Polarity is on the x-axis, denoting how the Formation is perceived, in a negative (-1) versus a positive (+1) light. This is achieved by analysing the words (using Bayesian statistical algorithms) that co-occur with the geological Formation mentions in text. Simplifying, terms such as ‘good’, ‘surprising’ and ‘abundance’ are deemed ‘positive’, whereas terms such as ‘poor’, ‘error’ and ‘problem’ are deemed negative. On the y-axis is subjectivity, from objective (0) to subjective (1); terms such as ‘strongly suggest’ and ‘by far’ are indicative of subjective views. Standard sentiment algorithms cannot be used with accuracy on geoscience content, as the everyday terms ‘old’, ‘fault’ and ‘thick’ for example, which can be used in social media to denote negative views, are not negative in a geoscience sense!
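A simplified scoring sketch illustrates the two axes; the word scores below are invented placeholders, not the Bayesian model's actual weights. Note that ‘fault’ and ‘thick’ are deliberately absent from the polarity table.

```python
# Hypothetical domain-tuned scores; 'fault' and 'thick' deliberately neutral
POLARITY = {"good": 0.7, "abundance": 0.5, "poor": -0.6, "problem": -0.7}
SUBJECTIVE = {"strongly", "suggest", "far", "surprising"}

def score(sentence):
    """Return (polarity, subjectivity) for a context sentence."""
    words = sentence.lower().split()
    pol = sum(POLARITY.get(w, 0.0) for w in words)
    subj = sum(1 for w in words if w in SUBJECTIVE) / len(words)
    return round(pol, 2), round(subj, 2)

print(score("leaching results strongly suggest poor recovery"))
```

Averaging these (polarity, subjectivity) pairs over all context sentences of a Formation gives its position on the cross-plot.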

[Image: Formation polarity v subjectivity cross-plot]

From these data, the Citronelle Formation appears in a negative light that may stimulate the scientist to investigate the sentences (context) which may lead to a learning event. Conversely, the high ‘subjectivity’ of the Popovich Formation may also trigger curiosity to understand the context which may lead to a re-interpretation.

5. Automatic Geo-coding

Geo-referencing journal articles is not a new technique. However, in many cases it is the entire journal article (or just images within the article) that is referenced; in essence, a summary of ‘aboutness’. The map below shows ‘mentions’ of the search query term concept ‘precambrian’ in the full text (body text) of all articles in the SEG corpus where they can be automatically geo-located. Compared to techniques that only use keywords and/or abstracts of the journal article (the ‘information container’), there is a 200% enhancement (increase) in geo-locations. Clicking on the locations to show the ‘mentions’, the sentences or paragraphs in which the query term concept is mentioned, may yield insights that simply geo-locating entire journal articles cannot.

[Image: map of geocoded ‘precambrian’ mentions]

Instead of colour coding the frequency of occurrence, bubble plots can be used, where the size of the bubble relates to the frequency of mention. This can also be combined with data external to the text corpus. An example is shown below, integrating the surface geology of the United States using US Geological Survey (USGS) WMS GIS spatial data.

[Image: bubble plot over USGS surface geology]

More granular geo-coding is simply a case of adding in more specific lookup lists for latitude and longitude of any entity.
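The lookup-list approach can be sketched as a small gazetteer keyed by place name, where longer (more specific) matches win over coarser ones. The entries and coordinates below are illustrative assumptions.

```python
# Hypothetical centroid gazetteer, from coarse (basin) to granular (mine)
GAZETTEER = {
    "Sirte Basin": (31.0, 18.0),     # (lat, lon) basin centroid, assumed
    "Carlin mine": (40.9, -116.4),   # point of interest, assumed
}

def geocode(mention_text):
    # Longest matching place name wins, so granular entries beat basins
    hits = [name for name in GAZETTEER if name.lower() in mention_text.lower()]
    return GAZETTEER[max(hits, key=len)] if hits else None

print(geocode("High grades were reported near the Carlin mine"))
```

Making the geo-coding more granular is then literally just adding rows to the gazetteer, as the text says.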

[Image: mine-level geocoding]

6. Topics

Another common technique to ‘summarize’ the essence of what ‘lies beneath’ in text relies on a range of methods, from complex word co-occurrence patterns and Principal Component Analysis (PCA) to Eigenvalues and Eigenvectors. The image below shows the clusters of topics for the search query ‘leaching’ in the SEG text corpus. These techniques can be applied at any level of granularity: an abstract, a single article, a whole corpus, or as a delta between journals or corpora. Topic modelling is typically applied longitudinally (through time) to surface changes in the intent behind text.
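As a crude stand-in for a full topic model, the most frequent content words across documents matching a query already give a rough ‘topic’ summary. This is only a counting proxy, not PCA or LDA, and the documents and stop-word list are invented:

```python
import re
from collections import Counter

STOP = {"the", "of", "in", "and", "is", "by", "from"}
docs = [
    "Leaching of copper by acid solutions in the oxidised zone.",
    "Acid leaching recovers copper from the heap.",
    "Gold mineralisation in quartz veins.",
]

def topic_words(query, n=3):
    # Crude proxy for a topic: top content words in documents matching the query
    counts = Counter()
    for doc in docs:
        words = [w for w in re.findall(r"[a-z]+", doc.lower()) if w not in STOP]
        if query in words:
            counts.update(w for w in words if w != query)
    return [w for w, _ in counts.most_common(n)]

print(topic_words("leaching"))
```

Real topic modelling replaces the raw counts with latent factors (via eigen-decomposition or probabilistic inference), which is what separates documents into the distinct topic clusters shown in the figure.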

[Image: topic model clusters for the query ‘leaching’]

 

7. Mathematical Word Vectors and Hypothesis Testing

The word co-occurrence patterns of any entity can be converted into a mathematical vector and the similarities compared with one another. From literature reviews, these techniques have been applied only sparsely, if at all, within the geoscience discipline compared to simple entity extraction and association.

The cross-plot below shows geological periods in the SEG corpus plotted by their similarity to the word vector of ‘volcanics’ on the x-axis and ‘limestone’ on the y-axis. In the bottom right, the Pre-Cambrian Archean period (2.5-4 billion years ago, largely before complex life on earth) is very similar to ‘volcanics’ and not ‘limestone’, which is what you would expect. Conversely, the Mississippian (top middle) is very similar to ‘limestone’ and not ‘volcanics’, which is again what you would expect, as sea level was very high with warm shallow seas. So this supports the theory that word vectors from text can surface real-world patterns that make sense. Perhaps they can also reveal what we don’t yet know.
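The simplest form of word vector is a count of the words co-occurring with a term, compared by cosine similarity. The four-sentence "corpus" below is invented to mirror the Archean/Mississippian example; production systems would use learned embeddings (e.g. word2vec) instead of raw counts.

```python
import math
import re
from collections import Counter

# Invented mini-corpus echoing the patterns described in the text
corpus = [
    "Archean greenstone belts contain abundant volcanics and basalt flows.",
    "Archean terranes preserve volcanics erupted before life diversified.",
    "Mississippian limestone formed in warm shallow seas.",
    "Crinoidal limestone of Mississippian age records high sea level.",
]

def vector(term):
    # Context vector: words co-occurring with the term in the same sentence
    v = Counter()
    for sent in corpus:
        words = re.findall(r"[a-z]+", sent.lower())
        if term in words:
            v.update(w for w in words if w != term)
    return v

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb) if na and nb else 0.0

print(cosine(vector("archean"), vector("volcanics")))
print(cosine(vector("archean"), vector("limestone")))
```

In this toy corpus ‘archean’ shares context with ‘volcanics’ and none with ‘limestone’, reproducing the bottom-right position on the cross-plot.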

[Image: geological period similarity cross-plot]

A variation of this technique, which it is believed may never have been tried before in the Geosciences, is combining data from a database with word vector information. In the cross-plot below, US states (e.g. Florida, Wyoming, Oregon) are plotted by their annual rainfall on the y-axis (from the National Oceanic and Atmospheric Administration (NOAA) database) and their similarity to the word vector ‘Arsenic’ in the SEG corpus on the x-axis. A weak correlation (R2=0.26) is found, implying more similarity to the word vector ‘Arsenic’ with decreasing rainfall. Simplistically, this could be due to more arid environments (less rainfall) leading to higher pH conditions, with Arsenic more likely to mobilise from the underlying geology into groundwater and aquifers.
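The R2 statistic quoted is just the squared Pearson correlation between the database column and the vector-similarity column; it can be computed by hand. The rainfall and similarity values below are invented to show the calculation, not the NOAA/SEG data.

```python
# Least-squares R2 between rainfall and word-vector similarity (toy values)
rainfall = [10, 20, 30, 40, 50]           # hypothetical annual inches per state
similarity = [0.9, 0.7, 0.6, 0.5, 0.2]    # hypothetical cosine similarity to 'Arsenic'

n = len(rainfall)
mx, my = sum(rainfall) / n, sum(similarity) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(rainfall, similarity))
sxx = sum((x - mx) ** 2 for x in rainfall)
syy = sum((y - my) ** 2 for y in similarity)
r2 = sxy ** 2 / (sxx * syy)
print(round(r2, 2))
```

Here sxy is negative (similarity falls as rainfall rises), matching the direction of the relationship described, while R2 measures only its strength.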

[Image: rainfall v ‘Arsenic’ word vector similarity cross-plot]

This could point to the potential value of combining word vector similarities from text with traditional measured data stored in structured databases. The whole may be greater than the sum of the parts.

A final example integrates data from the US National Cancer Database (NCDB), the Alzheimer’s Association and, again, text vectors from the SEG corpus. The average cancer mortality rate per US state (per 100,000 people) is plotted on the x-axis and the average Alzheimer’s mortality rate per US state on the y-axis. The similarity of US state word vectors to the heavy metal ‘Cadmium’ word vector is shown by the colour and size of the marker: the more similar, the larger the circle. Those above average similarity in the sample are coloured orange, those below average are coloured blue. There is no statistically significant correlation and, even if there were, correlation is of course not causation. There are many demographic and socio-economic factors at play in a complex system. However, these techniques may be useful in surfacing patterns that warrant further investigation or hypothesis testing.

[Image: mortality rates v ‘Cadmium’ word vector similarity]

 

8. Automated Discovery

The final example compares the linkage between the word vector of every concept in the corpus, with the word vector of every other concept in the corpus, and their similarity to the word vectors of a hypothesized theme. In the example below, the theme comprises elements typically associated with geogenic (natural) contamination in groundwater (e.g. Aluminium, Iron, Copper, Mercury, Lead).

A new simple ratio has been developed (Cleverley 2017), combining linear regression with a scaling factor representing the individual similarity of the concept(s) to the theme, to potentially surface the ‘unusual’ associations which may warrant further discovery. In the run below, over 150 million word vector combinations were tested by an automated algorithm. This took four hours on a standard laptop.

[Image: automated discovery results]

[Image: ratio equation]

For example, ‘Argon dating’ and ‘Feldspar Chlorite’ as individual concepts, do not have high similarity to the theme. However, as an association, they have a disproportionately higher correlation than one would expect, which may warrant further exploration to identify a causal mechanism.

Just as Swanson (1988) manually identified (inferred) a link between magnesium deficiency and migraines that was not present in any single article but emerged from concepts shared across articles, these automated techniques could highlight new associations. This could lead to new knowledge and, ultimately, new scientific discoveries hidden amongst our text in plain sight.

Knowledge is socially constructed, and different text corpora will likely lead to different word vectors for the same concepts, depending on the sub-discipline and nature of the text. These differences may also surface clues to new phenomena of interest.

Based on literature reviews, the use of word vector similarities of entities with external data is potentially under-utilized in the geosciences. Future work will most likely expand the research to apply to much larger quantities of journals and further develop automated approaches. Questions, comments and ideas are always welcome, feel free to contact me on the email below.

Paul Cleverley PhD

Researcher

Robert Gordon University

Email: p.h.cleverley@rgu.ac.uk

Blog: www.paulhcleverley.com

A PDF of this article is available by clicking <here>.

References

[Image: reference list]

 

Are Search Algorithms Neutral?


Enterprise search and discovery algorithms are often perceived as objective and neutral, helping us overcome our own biases, even if they don’t always produce what we want or need. The Cognitive Computing narrative is one where machines read vast amounts of text to compensate for human cognitive bias and potential organizational dogma. The mantra is not to produce the ‘right answer’, but the ‘best available’. But can search algorithms be truly objective and unbiased themselves?

Search Engine Bias

Various phenomena that involve the manipulation of search engine results are typically referred to as search engine bias. Bias, however, is not easy to define and can be hard to detect. What is the difference between bias and a point of view? Take the two incompatible statements: “It is a truism that every author is biased in favour of the claim he is making” and “Bias and prejudice are forms of error”.

There is avoidable bias (such as promoting a narrow partisan view when a broader non-partisan view ought to be taken), there is technical bias (such as that related to sampling) and there is unavoidable bias (such as in news reporting). This is not to criticise news reporting, but to guard against any view that reporting can be absolutely neutral. It is proposed that many aspects of search engine ranking are an unavoidable bias; the danger (just as with news reporting) would be to view it as a neutral rendering of data. It may be better to talk in terms of pre-dispositions.

Search Engine Optimization

Search ranking involves automated and human interventions according to some design parameter choices (sometimes weightings are called ‘bias values’). Some content will be promoted and other content marginalized. Search Engine Optimization (SEO) is an iterative process to maintain and improve search result quality that may see some content rise and other content fall as a result of changes. Some scholars have sought to measure the bias of web search engines by their deviation from a relative ‘norm’ of their peers. In previous articles and research papers, I have discussed the positive elements of using search algorithms designed specifically to stimulate the unexpected, insightful and valuable: nudging search engines into the role of creative assistant, rather than just a time saver. This article looks at the predispositions (bias) that may be inherent in search algorithms.

How we come to know things

Internet search engines are ubiquitous; they have become an epistemology, ‘how we come to know things’, which raises ethical issues. This has prompted further scrutiny, to understand to what extent search algorithms and human interventions are truly unbiased. Indeed, some people argue that a search algorithm can never be neutral. Behind every algorithm is effectively a person, organization or society which created it, and which is likely to display biases of some form, so any rendering by search engines is value-laden, not value-free.

Knowledge Representations

Algorithms themselves often incorporate query rules and Knowledge Organization Systems (KOS) such as taxonomies or ontologies. These KOS are ‘one version’ of reality, and whilst they can enhance information discovery, these schemas may also reinforce dogma and potentially blind us to new discoveries. Whilst some indicate that new cognitive computing techniques allow us to evaluate without bias, it may be a falsehood to say automated systems lack any bias.

Social Voting

Another aspect utilized in algorithms is explicit social voting (within sub-cultures and societies), creating a form of ‘standardization’. The more an item is clicked through in search results in the context of a specific search query, the more popular it is perceived to be. Some information may therefore be ‘censored’ through its obscurity, where relevance is determined not by usefulness but by popularity, which may reinforce existing power structures and stereotypes. Items at the top of any search results list may exhibit ‘presentation bias’. Once an item gains a high search rank (85-95% of people never click through to page 2 of search engine results), a self-fulfilling prophecy may come into effect (the Matthew Effect): the rich get richer and the poor get poorer.

Personalization

Some algorithms also make use of user context (such as location and previous searches), a form of ‘personalization’. Some scholars feel that personalization has mitigated, or will mitigate, bias in search engine results ranking by producing tailored results. At the same time, individually tailored results unique to each person may place the searcher in an over-personalised filter bubble.
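A minimal sketch of location-based personalization (with invented documents and regions) shows how the same query can yield a different ‘reality’ for each searcher:

```python
# The same result set, re-ranked per user: documents matching the
# searcher's region are boosted to the top.

docs = [
    {"title": "Football fixtures (UK)", "region": "UK"},
    {"title": "Football fixtures (US)", "region": "US"},
]

def personalize(results, user_region):
    """Boost results matching the searcher's region."""
    return sorted(results,
                  key=lambda d: d["region"] == user_region,
                  reverse=True)

uk_view = personalize(docs, "UK")
us_view = personalize(docs, "US")
print(uk_view[0]["title"])  # Football fixtures (UK)
print(us_view[0]["title"])  # Football fixtures (US)
```

Each user sees a plausible first result, and neither sees what the other sees: the seed of a filter bubble.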

Technical Sample Bias

In addition to these ‘standardized’ and ‘personalized’ aspects of algorithms, there is technical bias related to the sample in the search index corpus. If the text within the search engine corpus is itself skewed, then you have a classic case of sampling bias. This may explain why the ‘Bing Predicts’ big data algorithm that followed the United Kingdom’s referendum on European Union (EU) membership predicted a 55% vote to remain on June 23rd 2016. Social media trends may not reflect everyone’s opinions; the corpus may be prejudiced. Like any model, it can hold true until it does not. The significant failure of the Google Flu Trends algorithm is another example, with some stating that ‘algorithm accountability’ may emerge as one of the biggest problems of our time.

Human Judges

In addition to automated rules and signals, search algorithms also undergo constant evaluation and tweaking by people in the background, with ratings generated by people judging how ‘good’ results are. It is therefore unlikely that search results are ever completely free from human intervention.

Power to Influence Elections

Taking a more sinister turn, studies have shown that manipulation of search result ranking in Google could potentially affect people’s attitudes towards health risks, without people being aware they were being fed biased information. Some scholars provide evidence that manipulation of search engine algorithms could even influence democracy in national elections. Evidence appears to exist for search engines biasing results both towards the left and the right during elections, although (arguably) big data may make it easier to find evidence to support whatever point of view you wish to take.

Bias in Enterprise Search

Recent research involving three separate enterprise search technologies/deployments points to algorithmic bias also existing behind an organization’s firewall, within enterprise search and discovery technology. For example, enterprise search technology from at least one software vendor had default ‘factory shipped’ search ranking configuration parameters that gave preference (ranking boosts) to its own document formats above those of its competitors.
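The kind of ‘factory shipped’ format preference described above might be sketched like this. The format names and boost values are hypothetical, not taken from any vendor’s actual configuration:

```python
# Default ranking configuration: some file formats receive a boost
# before anyone in the deploying organization has touched a setting.

FORMAT_BOOSTS = {
    "vendor_native_format": 1.5,  # the vendor's own format is promoted
    "pdf": 1.0,
    "competitor_format": 0.8,     # a competitor's format is demoted
}

def boosted_score(base_score, doc_format):
    """Apply the factory-shipped format boost to a relevance score."""
    return base_score * FORMAT_BOOSTS.get(doc_format, 1.0)

# Two equally relevant documents end up ranked differently:
print(boosted_score(1.0, "vendor_native_format"))  # 1.5
print(boosted_score(1.0, "competitor_format"))     # 0.8
```

Unless an administrator inspects and overrides these defaults, the predisposition ships with the product.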

Other examples in the enterprise include a bias in some ‘factory shipped’ enterprise search algorithms towards their country of origin. For example, in one search engine that automatically geo-references search results for display on a map, any document containing the phrase ‘west coast’ was assumed to be about California. In another deployment, which had indexed third-party information, the algorithms were designed to favour small information providers over large ones, simply for performance reasons; a case, perhaps, of an enterprise search algorithm making arbitrary ‘editorial’ choices.

It is commonplace in enterprise search deployments for engineers, with the best intentions, to override automatically generated organic search results using promoted results (often termed ‘best bets’), and to tweak results through user-defined ‘gold standard’ test sets and search log mining in the hunt for better search result quality. Some search engine practitioners state that engineers alone will have no idea what relevant results are, so involving users/customers to rate results is essential. Some organizations that have performed these types of search evaluation and tuning with test sets of documents have commented at enterprise search conferences that what one expert user feels is the optimal set of results for a search term can often differ significantly from the view of another expert in the same enterprise.

Filtering of results is also commonplace within enterprise search deployments and SharePoint search, to remove or hide results deemed undesirable, inappropriate or not useful, using negative filters of ‘dirty words’: for example, not showing results where the word ‘conference’ is mentioned. It would raise an interesting question (dilemma?) if management in an organization ever asked their enterprise search team (using the latest machine learning techniques) to ‘hide’ search results for any content felt to portray the company in a bad light, such as comments made by staff on internal enterprise social blogs about its HR policies. Some may feel this is acceptable information governance practice; others may feel it is unethical.
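A negative ‘dirty word’ filter of the kind described is only a few lines of code; the word list and documents below are illustrative:

```python
# Any result whose text mentions a term on the negative list is
# silently dropped before the user ever sees it.

DIRTY_WORDS = {"conference"}

def filter_results(results):
    """Drop results containing any term on the negative list."""
    return [r for r in results
            if not any(w in r["text"].lower() for w in DIRTY_WORDS)]

results = [
    {"id": 1, "text": "Q3 sales report"},
    {"id": 2, "text": "Annual conference agenda"},
]
print([r["id"] for r in filter_results(results)])  # [1]
```

The searcher has no indication that result 2 ever existed, which is precisely what makes such filters an information governance question.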

Conclusion

For a variety of reasons (such as complexity and trade secrets) it may never be possible to fully understand what enterprise search algorithms are doing and the intent behind them, although some published standards exist (such as Okapi BM25). Due to this opacity, a significant amount of trust is placed in the hands of those who design and deploy search algorithms. Adopting a position of unconditional faith in algorithms may pose many risks. Increasing awareness of what biases already exist (through accident or design), or could exist in the future, might be a prudent step to take.
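For reference, the published Okapi BM25 ranking function mentioned above can be condensed to a few lines. Note that even an open, well-documented formula leaves tunable parameters (k1 and b) as design choices for whoever deploys it:

```python
import math

def bm25(term_freq, doc_len, avg_doc_len, n_docs, docs_with_term,
         k1=1.2, b=0.75):
    """Okapi BM25 score of one query term against one document."""
    # Inverse document frequency: rarer terms carry more weight
    idf = math.log((n_docs - docs_with_term + 0.5)
                   / (docs_with_term + 0.5) + 1)
    # Saturating term frequency, normalized by document length
    tf = (term_freq * (k1 + 1)) / (
        term_freq + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * tf

# A rarer term (fewer matching docs) scores higher, all else equal:
common = bm25(term_freq=2, doc_len=100, avg_doc_len=100,
              n_docs=1000, docs_with_term=500)
rare   = bm25(term_freq=2, doc_len=100, avg_doc_len=100,
              n_docs=1000, docs_with_term=5)
print(rare > common)  # True
```

Even here, nothing dictates the ‘right’ values of k1 and b; transparency about the formula is not the same as neutrality in its application.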

As we are all predisposed to certain views, it seems likely that search engines will be as well.

Paul H. Cleverley

Researcher, Robert Gordon University