Mastering table tennis with hierarchy: a reinforcement learning approach with progressive self-play training

  • Li Y (2017) Deep reinforcement learning: an overview. arXiv:1701.07274

  • Ibarz J, Tan J, Finn C, Kalakrishnan M, Pastor P, Levine S (2021) How to train your robot with deep reinforcement learning: lessons we have learned. Int J Robot Res 40(4–5): 698–721

  • Lample G, Chaplot DS (2017) Playing FPS games with deep reinforcement learning. In: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 31

  • Yang Y, Juntao L, Lingling P (2020) Multi-robot path planning based on a deep reinforcement learning DQN algorithm. CAAI Trans Intell Technol 5(3): 177–183

  • Arulkumaran K, Deisenroth MP, Brundage M, Bharath AA (2017) Deep reinforcement learning: a brief survey. IEEE Signal Process Mag 34(6): 26–38

  • François-Lavet V, Henderson P, Islam R, Bellemare MG, Pineau J (2018) An introduction to deep reinforcement learning. Found Trends Mach Learn 11(3–4): 219–354

  • Lillicrap TP, Hunt JJ, Pritzel A, Heess N, Erez T, Tassa Y, Silver D, Wierstra D (2015) Continuous control with deep reinforcement learning. arXiv:1509.02971

  • Atkeson CG, Santamaria JC (1997) A comparison of direct and model-based reinforcement learning. In: Proceedings of International Conference on Robotics and Automation, Vol. 4. IEEE, pp 3557–3564

  • Barto AG, Mahadevan S (2003) Recent advances in hierarchical reinforcement learning. Discrete Event Dyn Syst 13(1–2): 41–77

  • Mahjourian R, Miikkulainen R, Lazic N, Levine S, Jaitly N (2018) Hierarchical policy design for sample-efficient learning of robot table tennis through self-play. arXiv:1811.12927

  • Tebbe J, Krauch L, Gao Y, Zell A (2021) Sample-efficient reinforcement learning in robotic table tennis. In: 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, pp 4171–4178

  • Gao W, Graesser L, Choromanski K, Song X, Lazic N, Sanketi P, Sindhwani V, Jaitly N (2020) Robotic table tennis with model-free reinforcement learning. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp 5556–5563

  • Wang Y, Sun Z, Luo Y, Zhang H, Zhang W, Dong K, He Q, Zhang Q, Cheng E, Song B (2023) A novel trajectory-based ball spin estimation method for table tennis robots. IEEE Trans Ind Electron, pp 1–11. https://doi.org/10.1109/tie.2023.3319743

  • Wang Y, Luo Y, Zhang H, Zhang W, Dong K, He Q, Zhang Q, Cheng E, Sun Z, Song B (2023) A table tennis robot control strategy for returning high-speed spinning balls. IEEE/ASME Trans Mechatron, pp 1–10. https://doi.org/10.1109/tmech.2023.3316165

  • Ding T, Graesser L, Abeyruwan S, D'Ambrosio DB, Shankar A, Sermanet P, Sanketi PR, Lynch C (2022) Learning high speed precision table tennis on a physical robot. In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). pp 10780–10787. https://doi.org/10.1109/iros47612.2022.9982205

  • Ma H, Büchler D, Schölkopf B, Muehlebach M (2023) Reinforcement learning with model-based feedforward inputs for robotic table tennis. Auton Robots, pp 1–17

  • Ma H, Fan J, Wang Q (2022) A novel ping-pong task strategy based on model-free multi-dimensional Q-function deep reinforcement learning. In: 2022 8th International Conference on Systems and Informatics (ICSAI). IEEE, pp 1–6

  • Al-Emran M (2015) Hierarchical reinforcement learning: a survey. Int J Comput Digit Syst 4(2)

  • Yuan J, Zhang J, Yan J (2022) Towards solving industrial sequential decision-making tasks under near-predictable dynamics via reinforcement learning: an implicit corrective value estimation approach

  • Cuayáhuitl H, Dethlefs N, Frommberger L, Richter KF, Bateman JA (2010) Generating adaptive route instructions using hierarchical reinforcement learning. In: Spatial Cognition VII. Springer, pp 319–334

  • Xu X, Huang T, Wei P, Narayan A, Leong TY (2020) Hierarchical reinforcement learning in StarCraft II with human expertise in subgoals selection. arXiv:2008.03444

  • Dethlefs N, Cuayáhuitl H (2015) Hierarchical reinforcement learning for situated natural language generation. Nat Lang Eng 21(3): 391–435

  • Araki B, Li X, Vodrahalli K, DeCastro J, Fry M, Rus D (2021) The logical options framework. In: International Conference on Machine Learning. PMLR, pp 307–317

  • Nachum O, Gu SS, Lee H, Levine S (2018) Data-efficient hierarchical reinforcement learning. Adv Neural Inf Process Syst 31

  • Bacon PL, Harb J, Precup D (2017) The option-critic architecture. In: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 31

  • Kulkarni TD, Narasimhan K, Saeedi A, Tenenbaum J (2016) Hierarchical deep reinforcement learning: integrating temporal abstraction and intrinsic motivation. Adv Neural Inf Process Syst 29

  • Ji Y, Li Z, Sun Y, Peng XB, Levine S, Berseth G, Sreenath K (2022) Hierarchical reinforcement learning for precise soccer shooting skills using a quadrupedal robot. In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). pp 1479–1486. https://doi.org/10.1109/iros47612.2022.9981984

  • Huang X, Li Z, Xiang Y, Ni Y, Chi Y, Li Y, Yang L, Peng XB, Sreenath K (2023) Creating a dynamic quadrupedal robotic goalkeeper with reinforcement learning. In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). pp 2715–2722. https://doi.org/10.1109/iros55552.2023.10341936

  • Hu R, Zhang Y (2022) Fast path planning for long-range planetary roving based on a hierarchical framework and deep reinforcement learning. Aerospace 9(2): 101

  • Bai Y, Jin C (2020) Provable self-play algorithms for competitive reinforcement learning. In: International Conference on Machine Learning. PMLR, pp 551–560

  • Hernandez D, Denamganaï K, Gao Y, York P, Devlin S, Samothrakis S, Walker JA (2019) A generalized framework for self-play training. In: 2019 IEEE Conference on Games (CoG). IEEE, pp 1–8

  • Zhang H, Yu T (2020) AlphaZero. In: Deep reinforcement learning: fundamentals, research and applications. Springer, pp 391–415

  • Brandão B, De Lima TW, Soares A, Melo L, Maximo MROA (2022) Multiagent reinforcement learning for strategic decision making and control in robotic soccer through self-play. IEEE Access 10: 72628–72642. https://doi.org/10.1109/ACCESS.2022.3189021

  • Lin F, Huang S, Pearce T, Chen W, Tu WW (2023) TiZero: mastering multi-agent football with curriculum learning and self-play. arXiv:2302.07515

  • Wang X, Thomas JD, Piechocki RJ, Kapoor S, Santos-Rodríguez R, Parekh A (2022) Self-play learning strategies for resource allocation in Open-RAN networks. Comput Netw 206: 108682

  • Andersson RL (1989) Aggressive trajectory generator for a robot ping-pong player. IEEE Control Syst Mag 9(2): 15–21

  • Lin HI, Yu Z, Huang YC (2020) Ball tracking and trajectory prediction for table-tennis robots. Sensors 20(2): 333

  • Miyazaki F, Matsushima M, Takeuchi M (2006) Learning to dynamically manipulate: a table tennis robot controls a ball and rallies with a human being. In: Advances in Robot Control: From Everyday Physics to Human-Like Movements. Springer. https://doi.org/10.1007/978-3-540-37347-6_15

  • Koç O, Maeda G, Peters J (2018) Online optimal trajectory generation for robot table tennis. Robot Auton Syst 105: 121–137. https://doi.org/10.1016/j.robot.2018.03.012

  • Mülling K, Kober J, Peters J (2010) Simulating human table tennis with a biomimetic robot setup. In: From Animals to Animats 11. Springer, pp 273–282. https://doi.org/10.1007/978-3-642-15193-4_26

  • Huang Y, Xu D, Tan M, Su H (2011) Trajectory prediction of spinning ball for ping-pong player robot. In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). pp 3434–3439. https://doi.org/10.1109/iros.2011.6095044

  • Kyohei A, Masamune N, Satoshi Y (2020) The ping-pong robot to return a ball precisely. OMRON TECHNICS 51: 16

  • Zhao Y, Xiong R, Zhang Y (2017) Model based motion state estimation and trajectory prediction of spinning ball for ping-pong robots using expectation-maximization algorithm. J Intell Robot Syst 87(3): 407–423

  • Lin HI, Huang YC (2019) Ball trajectory tracking and prediction for a ping-pong robot. In: 2019 9th International Conference on Information Science and Technology (ICIST). IEEE, pp 222–227

  • Abeyruwan SW, Graesser L, D'Ambrosio DB, Singh A, Shankar A, Bewley A, Jain D, Choromanski KM, Sanketi PR (2023) i-Sim2Real: reinforcement learning of robotic policies in tight human-robot interaction loops. In: Conference on Robot Learning. PMLR, pp 212–224

  • Büchler D, Guist S, Calandra R, Berenz V, Schölkopf B, Peters J (2022) Learning to play table tennis from scratch using muscular robots. IEEE Trans Robot

  • Zhu Y, Zhao Y, Jin L, Wu J, Xiong R (2018) Towards high level skill learning: learn to return table tennis ball using Monte-Carlo based policy gradient method. In: 2018 IEEE International Conference on Real-time Computing and Robotics (RCAR). pp 34–41. https://doi.org/10.1109/rcar.2018.8621776

  • Tebbe J, Gao Y, Sastre-Rienietz M, Zell A (2019) A table tennis robot system using an industrial KUKA robot arm. In: Brox T, Bruhn A, Fritz M (eds) Pattern Recognition. Springer, Cham, pp 33–45

  • Tebbe J (2022) Adaptive robot systems in highly dynamic environments: a table tennis robot. PhD thesis, Universität Tübingen

  • Gao Y, Tebbe J, Zell A (2023) Optimal stroke learning with policy gradient approach for robotic table tennis. Appl Intell 53(11): 13309–13322

  • Kaelbling LP, Littman ML, Moore AW (1996) Reinforcement learning: a survey. J Artif Intell Res 4: 237–285

  • Puterman ML (1990) Markov decision processes. Handbooks Oper Res Manag Sci 2: 331–434

  • Watkins CJ, Dayan P (1992) Q-learning. Mach Learn 8: 279–292

  • Fan J, Wang Z, Xie Y, Yang Z (2020) A theoretical analysis of deep Q-learning. In: Learning for Dynamics and Control. PMLR, pp 486–489

  • Konda V, Tsitsiklis J (1999) Actor-critic algorithms. Adv Neural Inf Process Syst 12

  • Peters J, Vijayakumar S, Schaal S (2005) Natural actor-critic. In: Machine Learning: ECML 2005, 16th European Conference on Machine Learning, Porto, Portugal, October 3–7, 2005, Proceedings. Springer, pp 280–291

  • Haarnoja T, Zhou A, Abbeel P, Levine S (2018) Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor. In: International Conference on Machine Learning. PMLR, pp 1861–1870

  • Senadeera M, Karimpanal TG, Gupta S, Rana S (2022) Sympathy-based reinforcement learning agents. In: Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems. pp 1164–1172

Source: https://link.springer.com/article/10.1007/s10489-025-06450-0
