1–20 of 56 results

    Trajectory Synthesis for the Coordinated Inspection of a Spacecraft with Safety Guarantees

    Hibbard, Michael / Cubuktepe, Murat / Shubert, Matthew et al. | AIAA | 2023
    Keywords: Markov Decision Process

    Multirobot Navigation Using Partially Observable Markov Decision Processes with Belief-Based Rewards

    Tzikas, Alexandros E. / Knowles, Derek / Gao, Grace X. et al. | AIAA | 2023
    Keywords: Markov Decision Process

    Optimization of Directional Sensor Orientation with Application to Sun Sensing

    Springmann, John C. / Cutler, James W. | AIAA | 2014
    Keywords: Gauss Markov Theorem

    Robust Space Trajectory Design Using Belief Optimal Control

    Greco, Cristian / Campagnola, Stefano / Vasile, Massimiliano | AIAA | 2022
    Keywords: Markov Decision Process

    Bayesian Reliability Analysis of the Enhanced Multimission Radioisotope Thermoelectric Generator

    Lee, Chung H. / Caillat, Thierry / Pinkowski, Stanley | AIAA | 2023
    Keywords: Markov Chain Monte Carlo-Based Bayesian Method

    Characterization and Control of Hysteretic Dynamics Using Online Reinforcement Learning

    Kirkpatrick, Kenton / Valasek, John / Haag, Chris | AIAA | 2013
    Keywords: Markov Decision Process

    Synthesizing Failure Detection, Isolation, and Recovery Strategies from Nondeterministic Dynamic Fault Trees

    Müller, Sascha / Gerndt, Andreas / Noll, Thomas | AIAA | 2018
    Keywords: Markov Decision Process

    Composition of Safety Constraints for Fixed-Wing Collision Avoidance Amidst Limited Communications

    Squires, Eric / Pierpaoli, Pietro / Konda, Rohit et al. | AIAA | 2022
    Keywords: Markov Decision Process

    Optimal Feedback Guidance of a Small Aerial Vehicle in a Stochastic Wind

    Anderson, Ross P. / Bakolas, Efstathios / Milutinović, Dejan et al. | AIAA | 2013
    Keywords: Markov Decision Process

    Generation of Spacecraft Operations Procedures Using Deep Reinforcement Learning

    Harris, Andrew / Valade, Trace / Teil, Thibaud et al. | AIAA | 2021
    Keywords: Markov Decision Process

    Reinforcement Learning of a Morphing Airfoil-Policy and Discrete Learning Analysis

    Lampton, Amanda / Niksch, Adam / Valasek, John | AIAA | 2010
    Keywords: Markov Decision Process

    Monte Carlo Tree Search Methods for the Earth-Observing Satellite Scheduling Problem

    Herrmann, Adam P. / Schaub, Hanspeter | AIAA | 2021
    Keywords: Markov Decision Process

    Reachability Analysis for Neural Network Aircraft Collision Avoidance Systems

    Julian, Kyle D. / Kochenderfer, Mykel J. | AIAA | 2021
    Keywords: Markov Decision Process

    Imitation Learning-Based Unmanned Aerial Vehicle Planning for Multitarget Reconnaissance Under Uncertainty

    Choi, Uihwan / Ahn, Jaemyung | AIAA | 2019
    Keywords: Markov Decision Process

    Methodology for Path Planning with Dynamic Data-Driven Flight Capability Estimation

    Singh, Victor / Willcox, Karen E. | AIAA | 2017
    Keywords: Markov Decision Process

    Developing Mathematical Formulations for the Integrated Problem of Sensors, Weapons, and Targets

    Ezra, Kristopher L. / DeLaurentis, Daniel A. / Mockus, Linas et al. | AIAA | 2016
    Keywords: Markov Decision Process

    Distributed Wildfire Surveillance with Autonomous Aircraft Using Deep Reinforcement Learning

    Julian, Kyle D. / Kochenderfer, Mykel J. | AIAA | 2019
    Keywords: Markov Decision Process

    Markov Decision Process Framework for Flight Safety Assessment and Management

    Balachandran, Sweewarman / Atkins, Ella | AIAA | 2016
    Keywords: Markov Decision Process

    Reinforcement Learning for Robust Trajectory Design of Interplanetary Missions

    Zavoli, Alessandro / Federici, Lorenzo | AIAA | 2021
    Keywords: Markov Decision Process

    Dynamic Resource Allocation for Efficient Sharing of Services from Heterogeneous Autonomous Vehicles

    Kaddouh, Bilal Y. / Crowther, William J. / Hollingsworth, Peter | AIAA | 2016
    Keywords: Markov Decision Process