The current surplus of unstructured, noisy open-source data provides a significant opportunity to assist in intelligence decision making.
BY ALEXANDER LONG (PHD STUDENT, UNSW)
The scale of this data, however, has made manual analysis by human experts increasingly infeasible. To combat this, automated systems have been developed to reduce the amount of superfluous and redundant information viewed by human analysts, distilling noisy, raw data into high-quality, easily understood information. Such systems are effectively information funnels: they take large amounts of data in at one end and pass only the important information on to a human for review.
These traditional methods are limited in that they rely on hand-crafted rules and have no capacity to understand the relations or meaning in the data they process. Because of this, and because of the scale at which the algorithms operate, important information continues to pass through such systems without ever being flagged for human review. Humans are naturally better suited to this task because of our ability to learn: after completing an Information Extraction task several times, an analyst will have built up intuition about the process, and will know where to look given a specific type of directive or query. Until recently, replicating this kind of complex pattern recognition was beyond the scope of computers; however, new breakthroughs in AI technology have now brought this problem into the realm of feasibility. My research focuses on advancing a specific class of these algorithms, termed 'Deep Reinforcement Learning' (Deep-RL), in order to bring the power of autonomous decision making to bear on the problem of Information Extraction.
These algorithms allow an artificial system to make decisions under uncertainty, a key component of human intelligence. Deep-RL is also extremely general, and has been applied to problems in logistics, robotics, telecommunications, energy management, game playing and finance. Recently, Deep-RL was used to beat the world's best human player at the ancient board game Go, and to learn, within four hours, to beat every other chess engine (and consequently every human) in the world. Deep-RL is uniquely suited to Information Extraction because, unlike other forms of Machine Learning, it tackles sequential problems; traditional methods instead operate on provided input-output examples and learn the general mapping between the pairs. A good example is the game of chess, where a player must make many moves before receiving explicit feedback (in the form of a win or a loss). Because accurate Information Extraction is an inherently sequential process, Deep-RL is far more applicable than other methods.
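The chess analogy above can be made concrete with a small sketch. Here a toy "chain" environment forces an agent to take the correct action five times in a row before it sees any reward at all, and tabular Q-learning (a classic reinforcement learning algorithm; the environment and parameters below are invented for illustration, not taken from my research) still recovers the correct behaviour from that delayed feedback alone.

```python
import random

N_STATES = 5     # positions 0..4; moving right from position 4 wins
LEFT, RIGHT = 0, 1

def step(state, action):
    """Return (next_state, reward, done). Reward arrives only at the very end."""
    if action == RIGHT:
        if state == N_STATES - 1:
            return state, 1.0, True        # the single, delayed reward
        return state + 1, 0.0, False
    return max(state - 1, 0), 0.0, False   # moving left never pays off

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # action-value table
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration

def greedy(state):
    best = max(Q[state])
    return random.choice([a for a in (LEFT, RIGHT) if Q[state][a] == best])

random.seed(0)
for episode in range(500):
    state, done = 0, False
    while not done:
        # Occasionally explore; otherwise exploit current knowledge.
        action = random.choice((LEFT, RIGHT)) if random.random() < epsilon else greedy(state)
        nxt, reward, done = step(state, action)
        target = reward + (0.0 if done else gamma * max(Q[nxt]))
        Q[state][action] += alpha * (target - Q[state][action])
        state = nxt

# The learned policy is "always move right", even though only the
# final move of a successful episode ever produced explicit feedback.
policy = [greedy(s) for s in range(N_STATES)]
print(policy)  # [1, 1, 1, 1, 1]
```

The point is the credit assignment: the reward at the end of the chain propagates backwards through the value table, so earlier moves that merely led towards the reward are eventually valued too.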
In practice, Deep-RL commonly makes use of large neural networks to increase the expressive power of the system; this is the "Deep" in Deep-RL. Given this combination, and sufficient computational power, an autonomous system can be implemented that learns where to look given its current knowledge of a situation. By knowing where to look, critical information can be identified and passed to a human analyst more quickly, more robustly and more accurately than with existing methods. Ultimately, this means national security threats are detected earlier, and the probability of such threats ever occurring is reduced.
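"Learning where to look" can itself be sketched in a few lines, here stripped down to a simple bandit-style agent (the feed names and hit rates are hypothetical, chosen purely for illustration). The agent chooses among three data feeds, each of which yields a useful item with a different unknown probability, and gradually concentrates its attention on the most productive one.

```python
import random

random.seed(1)
# Hypothetical feeds and their (unknown to the agent) hit rates.
HIT_RATE = {"social_media": 0.1, "news_wire": 0.3, "field_reports": 0.6}
feeds = list(HIT_RATE)
value = {f: 0.0 for f in feeds}   # running estimate of each feed's usefulness
count = {f: 0 for f in feeds}

for t in range(2000):
    if random.random() < 0.1:                  # occasionally explore a feed at random
        feed = random.choice(feeds)
    else:                                      # otherwise look where it has paid off before
        feed = max(feeds, key=value.get)
    reward = 1.0 if random.random() < HIT_RATE[feed] else 0.0
    count[feed] += 1
    value[feed] += (reward - value[feed]) / count[feed]  # incremental mean

best = max(feeds, key=value.get)
print(best)  # the agent settles on the most productive feed
```

A full Deep-RL system replaces the lookup table of estimates with a neural network, so that "where to look" can depend on rich context rather than a handful of fixed options, but the explore-then-concentrate behaviour is the same.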