Decision Making
2104 papers with code • 1 benchmark • 38 datasets
Decision Making is a complex task that involves analyzing data (at different levels of abstraction) from disparate sources and with different levels of certainty, merging the information by weighting some data sources more heavily than others, and arriving at a conclusion by exploring all possible alternatives.
Source: Complex Events Recognition under Uncertainty in a Sensor Network
Most implemented papers
An Introduction to Deep Reinforcement Learning
Deep reinforcement learning is the combination of reinforcement learning (RL) and deep learning.
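To make the reinforcement-learning half concrete, here is a minimal sketch: tabular Q-learning on a hypothetical 5-state chain whose rightmost state pays reward 1 and resets. Deep RL replaces the Q table below with a neural network over raw observations; everything else (the update rule, the bootstrapped target) carries over.

```python
import random

# Toy chain environment (illustrative, not from the paper):
# states 0..4, action 0 = left, action 1 = right; moving right
# from state 4 yields reward 1 and resets to state 0.
N, ACTIONS = 5, (0, 1)
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(s, a):
    if a == 1 and s == N - 1:
        return 0, 1.0                    # goal reached: reward 1, reset
    return max(0, min(N - 1, s + (1 if a else -1))), 0.0

random.seed(0)
alpha, gamma, s = 0.5, 0.9, 0
for _ in range(5000):                    # purely random exploration policy
    a = random.choice(ACTIONS)
    s2, r = step(s, a)
    # off-policy Q-learning update: bootstrap from the best next action
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
    s = s2

print(all(Q[(s, 1)] > Q[(s, 0)] for s in range(N)))  # "right" wins everywhere
```

Because Q-learning is off-policy, the greedy policy recovered from Q is optimal even though the behavior here is uniformly random.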
Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems
We then provide a mechanism to generate the smallest set of changes that will improve an individual's outcome.
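As an illustration of the recourse idea (this is a generic brute-force sketch, not the paper's mechanism), one can search change sets of increasing size until the model's decision flips, returning the smallest set found. The model weights and feasible actions below are hypothetical.

```python
import itertools

# Hypothetical linear "loan approval" model for illustration only.
weights = {"income": 0.6, "debt": -0.8, "tenure": 0.3}
bias = -1.0
approve = lambda x: sum(weights[f] * x[f] for f in weights) + bias > 0

def minimal_recourse(x, candidate_changes):
    """Try change sets of increasing size; return the first that approves."""
    for k in range(1, len(candidate_changes) + 1):
        for combo in itertools.combinations(candidate_changes.items(), k):
            x2 = dict(x, **dict(combo))
            if approve(x2):
                return dict(combo)       # smallest set of changes found
    return None

applicant = {"income": 1.0, "debt": 1.5, "tenure": 0.5}
actions = {"income": 2.0, "debt": 0.2, "tenure": 2.0}  # feasible new values
print(minimal_recourse(applicant, actions))
```

No single change flips the decision here, but raising income while reducing debt does, so the search returns that two-feature set.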
QPLEX: Duplex Dueling Multi-Agent Q-Learning
This paper presents a novel MARL approach, called duPLEX dueling multi-agent Q-learning (QPLEX), which takes a duplex dueling network architecture to factorize the joint value function.
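QPLEX's full duplex dueling structure is more involved, but the basic dueling decomposition it builds on can be sketched numerically: a value V(s) plus advantages A(s, a), recombined with a max-subtraction so that the greedy action's Q equals V (the values below are made up for illustration).

```python
# Dueling decomposition sketch: Q(s, a) = V(s) + A(s, a) - max_a' A(s, a'),
# so the best action satisfies Q = V and the decomposition is identifiable.
def dueling_q(value, advantages):
    best = max(advantages)
    return [value + a - best for a in advantages]

q = dueling_q(2.0, [0.5, 1.5, 1.0])
print(q)            # greedy action's Q equals V = 2.0
```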
IQ-Learn: Inverse soft-Q Learning for Imitation
In many sequential decision-making problems (e.g., robotics control, game playing, sequential prediction), human or expert data is available containing useful information about the task.
ReAct: Synergizing Reasoning and Acting in Language Models
While large language models (LLMs) have demonstrated impressive capabilities across tasks in language understanding and interactive decision making, their abilities for reasoning (e.g. chain-of-thought prompting) and acting (e.g. action plan generation) have primarily been studied as separate topics.
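The ReAct loop itself is simple to sketch: the model interleaves free-form "Thought" steps with "Action" tool calls, and each tool result is appended to the context as an "Observation". Below, a scripted list of steps stands in for the LLM, and `lookup` is a hypothetical tool.

```python
def lookup(term):                        # hypothetical knowledge-base tool
    kb = {"Eiffel Tower": "330 m"}
    return kb.get(term, "unknown")

scripted_steps = [                       # stands in for LLM completions
    "Thought: I need the tower's height.",
    "Action: lookup[Eiffel Tower]",
    "Thought: The observation answers the question.",
    "Finish: 330 m",
]

def react(steps):
    trace = []
    for step in steps:
        trace.append(step)
        if step.startswith("Action:"):
            tool, arg = step[len("Action: "):].rstrip("]").split("[")
            obs = {"lookup": lookup}[tool](arg)
            trace.append(f"Observation: {obs}")  # fed back into the context
        elif step.startswith("Finish:"):
            return step.split(": ", 1)[1], trace
    return None, trace

answer, trace = react(scripted_steps)
print(answer)
```

In the real method the next step is generated by the LLM conditioned on the growing trace; the control flow is the same.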
Thinking Fast and Slow with Deep Learning and Tree Search
Sequential decision making problems, such as structured prediction, robotic control, and game playing, require a combination of planning policies and generalisation of those plans.
Learning Multi-Level Hierarchies with Hindsight
Hierarchical agents have the potential to solve sequential decision making tasks with greater sample efficiency than their non-hierarchical counterparts because hierarchical agents can break down tasks into sets of subtasks that only require short sequences of decisions.
Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling
At the same time, advances in approximate Bayesian methods have made posterior approximation for flexible neural network models practical.
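For context, here is Thompson sampling in its exact form on a 3-armed Bernoulli bandit with Beta posteriors (arm reward rates are made up); the paper's subject is how well neural approximations of this posterior behave when exact inference is not available.

```python
import random

random.seed(1)
true_rates = [0.2, 0.5, 0.8]             # hypothetical arm reward rates
wins, losses = [1, 1, 1], [1, 1, 1]      # Beta(1, 1) priors per arm

pulls = [0, 0, 0]
for _ in range(3000):
    # sample one plausible reward rate per arm from its posterior...
    samples = [random.betavariate(wins[a], losses[a]) for a in range(3)]
    a = samples.index(max(samples))      # ...and play the arm that looks best
    pulls[a] += 1
    if random.random() < true_rates[a]:
        wins[a] += 1
    else:
        losses[a] += 1

print(pulls.index(max(pulls)))           # play concentrates on the best arm
```

Exploration falls out automatically: uncertain arms occasionally produce high posterior samples, so they keep being tried until their posteriors sharpen.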
Graph Convolutional Reinforcement Learning
The key is to understand the mutual interplay between agents.
ProtoAttend: Attention-Based Prototypical Learning
We propose a novel inherently interpretable machine learning method that bases decisions on few relevant examples that we call prototypes.
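A rough sketch of the prototype idea (a generic softmax-attention vote, not ProtoAttend's architecture): weight a handful of labelled examples by similarity to the input and decide from the weighted vote, so the highest-weight prototypes double as the explanation. The prototypes below are invented for illustration.

```python
import math

# Hypothetical 2-feature prototypes with decision labels.
prototypes = [((1.0, 1.0), "approve"), ((0.9, 1.2), "approve"),
              ((-1.0, -0.8), "deny")]

def decide(x):
    # attention weight = softmax of negative squared distance to each prototype
    scores = [-sum((xi - pi) ** 2 for xi, pi in zip(x, p)) for p, _ in prototypes]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    votes = {}
    for (_, label), w in zip(prototypes, weights):
        votes[label] = votes.get(label, 0.0) + w / total
    return max(votes, key=votes.get), votes

label, votes = decide((0.8, 0.9))
print(label)
```

The `votes` dictionary shows how much each class's prototypes contributed, which is exactly the kind of example-based explanation the paper is after.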