Library registration is open online via the site: https://biblio.enp.edu.dz
Re-registration takes place at:
• the Annex Library for 2nd-year CPST students
• the Central Library for students in specialization programs
Author detail
Author: Panos Y. Papalambros
Documents available written by this author
Online identification and stochastic control for autonomous internal combustion engines / Andreas A. Malikopoulos in Transactions of the ASME. Journal of dynamic systems, measurement, and control, Vol. 132 No. 2 (March/April 2010)
[article]
in Transactions of the ASME. Journal of dynamic systems, measurement, and control > Vol. 132 No. 2 (March/April 2010). - 6 p.
Title: Online identification and stochastic control for autonomous internal combustion engines
Document type: printed text
Authors: Andreas A. Malikopoulos, Author; Panos Y. Papalambros, Author; Dennis N. Assanis, Author
Publication year: 2010
Pagination: 6 p.
General note: Dynamic systems
Languages: English (eng)
Keywords: Control engineering computing; Internal combustion engines; Mechanical engineering computing
Dewey index: 629.8
Abstract: Advanced internal combustion engine technologies have increased the number of controllable variables and the ability to optimize engine operation. Values for these variables are determined during engine calibration by means of a static tabular correlation between the controllable variables and the corresponding steady-state engine operating points, so as to achieve desirable engine performance, for example in fuel economy, pollutant emissions, and engine acceleration. In engine use, table values are interpolated to match actual operating points. State-of-the-art calibration methods cannot continuously guarantee optimal engine operation over the entire operating domain, especially in the transient cases encountered in the driving styles of different drivers. This article briefly presents the theory and an algorithmic implementation that make the engine an autonomous intelligent system capable of learning the required values of the controllable variables in real time while operating a vehicle. The engine controller progressively perceives the driver's driving style and eventually learns to operate in a manner that optimizes specified performance criteria. A gasoline engine model, which learns to optimize fuel economy with respect to spark ignition timing, demonstrates the approach.
ISSN: 0022-0434
Online: http://asmedl.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=JDSMAA00013200 [...]
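The abstract above describes the conventional baseline the paper improves on: a static calibration map indexed by steady-state operating points, with table values interpolated at the actual operating point. A minimal sketch of that lookup is shown below; the axes, grid values, and function names are illustrative assumptions, not data from the paper.

```python
# Hypothetical calibration map: spark advance stored at steady-state
# (speed, load) grid points, interpolated bilinearly at runtime.
# All breakpoints and map values below are made-up illustrations.
import bisect

SPEED_AXIS = [1000, 2000, 3000, 4000]   # engine speed breakpoints (rpm)
LOAD_AXIS = [0.2, 0.4, 0.6, 0.8]        # normalized load breakpoints
SPARK_MAP = [                           # spark advance (deg BTDC) per grid point
    [10.0, 14.0, 18.0, 20.0],
    [12.0, 16.0, 20.0, 22.0],
    [14.0, 18.0, 22.0, 24.0],
    [15.0, 19.0, 23.0, 25.0],
]

def interpolate_spark(speed: float, load: float) -> float:
    """Bilinear interpolation of the calibration map at an operating point."""
    # Clamp to the last valid cell so points on the upper edge still resolve.
    i = max(0, min(bisect.bisect_right(SPEED_AXIS, speed) - 1, len(SPEED_AXIS) - 2))
    j = max(0, min(bisect.bisect_right(LOAD_AXIS, load) - 1, len(LOAD_AXIS) - 2))
    tx = (speed - SPEED_AXIS[i]) / (SPEED_AXIS[i + 1] - SPEED_AXIS[i])
    ty = (load - LOAD_AXIS[j]) / (LOAD_AXIS[j + 1] - LOAD_AXIS[j])
    low = SPARK_MAP[i][j] * (1 - ty) + SPARK_MAP[i][j + 1] * ty
    high = SPARK_MAP[i + 1][j] * (1 - ty) + SPARK_MAP[i + 1][j + 1] * ty
    return low * (1 - tx) + high * tx
```

As the abstract notes, such a static map is optimal only at the calibrated steady-state points; the paper's contribution is to adapt these values online to the driver's actual, transient operating pattern.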
A real-time computational learning model for sequential decision-making problems under uncertainty / Andreas A. Malikopoulos in Transactions of the ASME. Journal of dynamic systems, measurement, and control, Vol. 131 No. 4 (July 2009)
[article]
in Transactions of the ASME. Journal of dynamic systems, measurement, and control > Vol. 131 No. 4 (July 2009). - 8 p.
Title: A real-time computational learning model for sequential decision-making problems under uncertainty
Document type: printed text
Authors: Andreas A. Malikopoulos, Author; Panos Y. Papalambros, Author; Dennis N. Assanis, Author
Publication year: 2009
Pagination: 8 p.
General note: Dynamic systems
Languages: English (eng)
Keywords: modeling dynamic systems; control policy; simulation-based stochastic framework
Abstract: Modeling dynamic systems subject to stochastic disturbances in order to derive a control policy is a ubiquitous task in engineering. In some instances, however, obtaining a model of a system may be impractical or impossible. Alternative approaches have been developed using a simulation-based stochastic framework, in which the system interacts with its environment in real time and obtains information that can be processed to produce an optimal control policy. In this context, the problem of developing a policy for controlling the system's behavior is formulated as a sequential decision-making problem under uncertainty. This paper considers the problem of deriving a control policy in real time for a dynamic system with unknown dynamics, formulated as sequential decision-making under uncertainty. The evolution of the system is modeled as a controlled Markov chain. A new state-space representation model and a learning mechanism are proposed that can be used to improve system performance over time. The major difference between existing methods and the proposed learning model is that the latter utilizes an evaluation function that considers the expected cost achievable by state transitions forward in time. The model allows decision-making based on gradually enhanced knowledge of the system response as it transitions from one state to another, in conjunction with the actions taken at each state. The proposed model is demonstrated on the single cart-pole balancing problem and a vehicle cruise-control problem.
Dewey index: 629.8
ISSN: 0022-0434
Online: http://dynamicsystems.asmedigitalcollection.asme.org/Issue.aspx?issueID=26497&di [...]
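The second abstract names two ingredients: a controlled Markov chain whose transition behavior is learned from observed transitions, and an evaluation function that ranks actions by expected cost looking forward in time. The sketch below illustrates those ingredients generically; it is not the paper's algorithm, and the class name, cost structure, and update rule are all assumptions.

```python
# Generic illustration (not the paper's method) of learning a controlled
# Markov chain online: estimate transition probabilities and one-step costs
# from observed transitions, and evaluate actions by expected cost plus
# discounted estimated cost-to-go of successor states.
from collections import defaultdict

class LearningController:
    def __init__(self, states, actions, gamma=0.9):
        self.states, self.actions, self.gamma = states, actions, gamma
        self.counts = defaultdict(lambda: defaultdict(int))  # (s, a) -> {s': n}
        self.cost_sum = defaultdict(float)                   # (s, a) -> total cost
        self.value = {s: 0.0 for s in states}                # estimated cost-to-go

    def observe(self, s, a, cost, s_next):
        """Record one observed transition and refresh the estimate at s."""
        self.counts[(s, a)][s_next] += 1
        self.cost_sum[(s, a)] += cost
        self.value[s] = min(self._evaluate(s, b) for b in self.actions)

    def _evaluate(self, s, a):
        """Expected cost of taking a in s under the estimated transition model."""
        hist = self.counts[(s, a)]
        total = sum(hist.values())
        if total == 0:
            return 0.0  # optimistic default for untried actions (encourages exploration)
        expected_future = sum(n / total * self.value[s2] for s2, n in hist.items())
        return self.cost_sum[(s, a)] / total + self.gamma * expected_future

    def act(self, s):
        """Pick the action with the lowest evaluated expected cost."""
        return min(self.actions, key=lambda a: self._evaluate(s, a))
```

For example, after observing that action "a" in state 0 cost 1.0 and action "b" cost 5.0, `act(0)` prefers "a". The knowledge of the system response is refined with every transition, matching the abstract's "gradually enhanced knowledge" framing.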