Stochastic dynamic production control by neurodynamic programming

Monostori, László and Csáji, Balázs Csanád (2006) Stochastic dynamic production control by neurodynamic programming. CIRP Annals-Manufacturing Technology, 55 (1). pp. 473-478.

Full text not available from this repository.

Abstract

The paper proposes Markov Decision Processes (MDPs) to model production control systems that operate in uncertain and changing environments. In an MDP, finding an optimal control policy can be reduced to computing the optimal value function, which is the unique solution of the Bellman equation. Reinforcement learning methods, such as Q-learning, can be used to estimate this function; however, value estimates are often available only for a few states of the environment, typically generated by simulation. The paper suggests applying a new type of support vector regression model, called ν-SVR, which can effectively fit a smooth function to the available data and offers good generalization. The effectiveness of the approach is demonstrated by experimental results on both benchmark and industry-related data.
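To make the abstract's pipeline concrete, below is a minimal sketch of tabular Q-learning on a hypothetical two-state production MDP (machine condition good/worn, actions produce/maintain). The MDP, its rewards, and all parameters are illustrative assumptions, not the paper's model; in the paper's setting, the state-value samples produced by such a simulation would then be smoothed with ν-SVR to generalize to unvisited states.

```python
import random

# Hypothetical toy production MDP (an illustration, not the paper's model):
#   states:  0 = machine in good condition, 1 = worn
#   actions: 0 = produce, 1 = maintain
# Producing yields reward but may wear the machine; maintenance has a
# cost but restores the machine to good condition.
def step(state, action, rng):
    if action == 1:                                   # maintain: pay cost, machine good again
        return 0, -1.0
    if state == 0:                                    # produce on a good machine
        next_state = 1 if rng.random() < 0.3 else 0   # 30% chance of wear
        return next_state, 2.0
    return 1, 0.5                                     # produce on a worn machine: low yield

def q_learning(episodes=20000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Estimate Q(s, a) by simulated interaction (epsilon-greedy exploration)."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0], [0.0, 0.0]]                      # Q[state][action]
    state = 0
    for _ in range(episodes):
        if rng.random() < eps:                        # explore
            a = rng.randrange(2)
        else:                                         # exploit current estimate
            a = max((0, 1), key=lambda x: Q[state][x])
        nxt, r = step(state, a, rng)
        # Bellman-style temporal-difference update toward r + gamma * max_a' Q(s', a')
        Q[state][a] += alpha * (r + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt
    return Q

Q = q_learning()
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in (0, 1)]
```

For this toy chain the learned policy produces while the machine is in good condition; with more states than can be visited in simulation, the sampled Q-values would serve as the regression targets for the ν-SVR fit described in the abstract.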

Item Type: ISI Article
Uncontrolled Keywords: Production control, Machine learning, Neurodynamic programming
Subjects: Q Science > QA Mathematics and Computer Science > QA75 Electronic computers. Computer science
Divisions: Research Laboratory on Engineering & Management Intelligence
Depositing User: Eszter Nagy
Date Deposited: 11 Dec 2012 15:26
Last Modified: 25 Jul 2018 12:53
URI: http://eprints.sztaki.hu/id/eprint/4459
