Reinforcement learning in a distributed market-based production control system

Csáji, Balázs Csanád and Monostori, László and Kádár, Botond (2006) Reinforcement learning in a distributed market-based production control system. Advanced Engineering Informatics, 20 (3). pp. 279-288.

Full text not available from this repository.

Abstract

The paper presents an adaptive, iterative, distributed scheduling algorithm that operates in a market-based production control system. The manufacturing system is agentified, i.e., every machine and job is associated with its own software agent. Each agent learns to select presumably good schedules, and in this way the size of the search space can be reduced. To achieve adaptive behavior and search space reduction, a triple-level learning mechanism is proposed. The top level of learning incorporates a simulated annealing algorithm, the middle (and most important) level contains a reinforcement learning system, while the bottom level is realized by a numerical function approximator, such as an artificial neural network. The paper also suggests a cooperation technique for the agents, analyzes the time and space complexity of the solution, and presents some experimental results.
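The abstract only outlines the triple-level mechanism, so the following Python sketch is an illustrative reading of it, not the paper's actual algorithm: a simulated-annealing schedule on top controls exploration, a reinforcement-learning agent in the middle learns which candidate schedules look promising, and a simple linear function approximator stands in for the neural network at the bottom. All class and function names (LinearApproximator, SchedulingAgent, annealed_search), the temperature-as-exploration coupling, and the toy cost function are assumptions introduced for illustration.

```python
# Illustrative sketch of the triple-level learning idea described in the
# abstract. Every name and numeric choice here is an assumption, not taken
# from the paper.
import math
import random


class LinearApproximator:
    """Bottom level: approximates schedule values from feature vectors."""

    def __init__(self, n_features, lr=0.01):
        self.w = [0.0] * n_features
        self.lr = lr

    def predict(self, features):
        return sum(w * f for w, f in zip(self.w, features))

    def update(self, features, target):
        # Gradient step toward the target value.
        error = target - self.predict(features)
        self.w = [w + self.lr * error * f for w, f in zip(self.w, features)]


class SchedulingAgent:
    """Middle level: a reinforcement-learning agent (e.g., one per machine
    or job) that learns which candidate schedules look promising."""

    def __init__(self, n_features, gamma=0.95):
        self.value = LinearApproximator(n_features)
        self.gamma = gamma

    def choose(self, candidates, temperature):
        # Higher temperature (set by the top-level annealing) means more exploration.
        if random.random() < temperature:
            return random.choice(candidates)
        return max(candidates, key=lambda c: self.value.predict(c["features"]))

    def learn(self, features, reward, next_features):
        # One-step temporal-difference update.
        target = reward + self.gamma * self.value.predict(next_features)
        self.value.update(features, target)


def annealed_search(agent, generate_candidates, evaluate, iterations=1000, t0=1.0):
    """Top level: simulated annealing decays the exploration temperature
    while the agent iteratively refines its value estimates."""
    best, best_cost = None, math.inf
    for k in range(iterations):
        temperature = t0 * math.exp(-k / (iterations / 5))
        candidates = generate_candidates()
        choice = agent.choose(candidates, temperature)
        cost, next_features = evaluate(choice)
        # Lower cost (e.g., shorter makespan) yields higher reward.
        agent.learn(choice["features"], -cost, next_features)
        if cost < best_cost:
            best, best_cost = choice, cost
    return best, best_cost


if __name__ == "__main__":
    # Toy usage: candidates are random feature vectors, cost is their sum.
    def gen():
        return [{"features": [random.random() for _ in range(4)]} for _ in range(5)]

    def ev(choice):
        return sum(choice["features"]), choice["features"]

    agent = SchedulingAgent(n_features=4)
    best, cost = annealed_search(agent, gen, ev, iterations=200)
    print("best cost:", round(cost, 3))
```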

Item Type: ISI Article
Uncontrolled Keywords: dynamic scheduling, multi-agent systems, reinforcement learning
Subjects: Q Science > QA Mathematics and Computer Science > QA75 Electronic computers. Computer science
Divisions: Research Laboratory on Engineering & Management Intelligence
Depositing User: Eszter Nagy
Date Deposited: 11 Dec 2012 15:26
Last Modified: 25 Jul 2018 12:52
URI: http://eprints.sztaki.hu/id/eprint/4449
