Artificial intelligence and libraries

This section deals with the expansion of AI and its potential implications for libraries and library technology. As a rapidly emerging area, the reach of AI into how libraries operate and the technology they employ is likely to grow significantly. Examples of AI applications currently used in libraries include chatbots for triaging online enquiries and AI-powered modules added to library content platforms.

Understanding AI

To understand where AI should be used and where it will be most successful, one must first understand what AI really is. AI, most often encountered today in the form of machine learning, refers to a broad set of algorithms that can solve a specific set of problems if trained properly.

The success of artificial intelligence depends on data.

The AI bucket consists of:

  • Big data
  • Analytics
  • Machine learning
  • Natural language processing
  • Data visualisation
  • Decision logic

Reproduced from a blog post by Nick Ismail, Information Age, 23 April 2018.

A similar composite of AI components appears in Smith, A. (2016). Big Data Technology, Evolving Knowledge Skills and Emerging Roles. Legal Information Management, 16(4), 219–224.

Common AI terms

Taken from:

House of Lords Select Committee on Artificial Intelligence (2018). AI in the UK: ready, willing and able? Report of Session 2017–19, HL Paper 100, 16 April 2018.

Algorithm

A series of instructions for performing a calculation or solving a problem, especially with a computer. They form the basis for everything a computer can do, and are therefore a fundamental aspect of all AI systems.
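
As a minimal illustration (the function and the example data below are invented for this page, not taken from any of the sources cited), here is one such series of instructions written in Python: a binary search that finds a value in a sorted list by repeatedly halving the search range.

    def binary_search(items, target):
        """Return the index of target in the sorted list items, or -1 if absent."""
        low, high = 0, len(items) - 1
        while low <= high:
            mid = (low + high) // 2
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                low = mid + 1    # target can only be in the upper half
            else:
                high = mid - 1   # target can only be in the lower half
        return -1

    # Example: locate a classmark in a sorted shelf list
    print(binary_search(["HD30", "QA76", "Z665"], "QA76"))  # prints 1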

Expert system

A computer system that mimics the decision-making ability of a human expert by following pre-programmed rules, such as ‘if this occurs, then do that’. These systems fuelled much of the earlier excitement surrounding AI in the 1980s, but have since become less fashionable, particularly with the rise of neural networks.
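
A minimal sketch of such ‘if this occurs, then do that’ rules, using an invented set of enquiry-triage rules of the kind a library might use (the rules, keywords and routing labels are all hypothetical):

    # Hypothetical expert-system rules for triaging library enquiries.
    # Each rule pairs a condition with an action; the first match fires.
    RULES = [
        (lambda q: "renew" in q,    "Route to the circulation team"),
        (lambda q: "database" in q, "Route to the e-resources team"),
        (lambda q: "opening" in q,  "Answer from the opening-hours FAQ"),
    ]

    def triage(enquiry):
        """Apply pre-programmed rules to an enquiry, expert-system style."""
        text = enquiry.lower()
        for condition, action in RULES:
            if condition(text):
                return action
        return "Refer to a librarian"  # fallback when no rule matches

    print(triage("How do I renew my loans?"))  # Route to the circulation team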

Machine learning

One particular form of AI, which gives computers the ability to learn from and improve with experience, without being explicitly programmed. When provided with sufficient data, a machine learning algorithm can learn to make predictions or solve problems, such as identifying objects in pictures or winning at particular games.
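
To make ‘learning from experience’ concrete, the following sketch (with invented data and an arbitrary learning rate) fits the rule y ≈ w × x by gradient descent; the value of w is learned from the data rather than programmed in:

    # Learn the coefficient w in y = w * x from example data.
    data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]  # roughly y = 2x

    w = 0.0                   # start knowing nothing
    for step in range(200):   # 'experience': repeated passes over the data
        # gradient of the mean squared error with respect to w
        grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
        w -= 0.01 * grad      # nudge w to reduce the prediction error

    print(round(w, 2))        # about 2.02: the rule was learned, not coded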

Neural network

Also known as an artificial neural network, this is a type of machine learning loosely inspired by the structure of the human brain. A neural network is composed of simple processing nodes, or ‘artificial neurons’, which are connected to one another in layers. Each node will receive data from several nodes ‘above’ it, and give data to several nodes ‘below’ it. Nodes attach a ‘weight’ to the data they receive, and attribute a value to that data. If the data does not pass a certain threshold, it is not passed on to another node. The weights and thresholds of the nodes are adjusted when the algorithm is trained until similar data input results in consistent outputs.
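
The following toy example (weights and threshold invented for illustration) implements a single such node; training a real network would adjust these numbers automatically:

    # One 'artificial neuron': weight the inputs, sum them, and only
    # pass a signal on if the total clears the threshold.
    def neuron(inputs, weights, threshold):
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # 1*0.6 + 0*0.4 + 1*0.5 = 1.1, which clears the threshold of 1.0
    print(neuron([1, 0, 1], [0.6, 0.4, 0.5], threshold=1.0))  # prints 1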

Deep learning

A more recent variation of neural networks, which uses many layers of artificial neurons to solve more difficult problems. Its popularity as a technique increased significantly from the mid-2000s onwards, and it is behind much of the wider interest in AI today. It is often used to classify information from images, text or sound.
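
Building on the single-neuron sketch above, a deep network simply stacks such layers so that each layer's outputs become the next layer's inputs. The weights below are invented for illustration; in practice they are set by training:

    # A tiny two-layer forward pass (all weights are illustrative).
    def layer(inputs, weight_matrix):
        """Each row of weights defines one neuron; max(0, x) is its activation."""
        return [max(0.0, sum(i * w for i, w in zip(inputs, row)))
                for row in weight_matrix]

    hidden = [[0.5, -0.2], [0.3, 0.8]]   # first layer: 2 inputs -> 2 neurons
    output = [[1.0, 0.5]]                # second layer: 2 inputs -> 1 neuron

    x = [1.0, 2.0]
    print(layer(layer(x, hidden), output))  # data flows layer by layer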

See the useful resources section for further information on AI from a range of authoritative sources.


