
opendilab/DI-engine




Updated on 2024.12.23 DI-engine-v0.5.3

Introduction to DI-engine

Documentation | Documentation (Chinese) | Tutorials | Feature | Task & Middleware | TreeTensor | Roadmap

DI-engine is a generalized decision intelligence engine for PyTorch and JAX.

It provides Python-first and asynchronous-native task and middleware abstractions, and modularly integrates several of the most important decision-making concepts: Env, Policy and Model. Based on these mechanisms, DI-engine supports various deep reinforcement learning algorithms with superior performance, high efficiency, well-organized documentation and unit tests:

  • Most basic DRL algorithms: such as DQN, Rainbow, PPO, TD3, SAC, R2D2, IMPALA
  • Multi-agent RL algorithms: such as QMIX, WQMIX, MAPPO, HAPPO, ACE
  • Imitation learning algorithms (BC/IRL/GAIL): such as GAIL, SQIL, Guided Cost Learning, Implicit BC
  • Offline RL algorithms: BCQ, CQL, TD3BC, Decision Transformer, EDAC, Diffuser, Decision Diffuser, SO2
  • Model-based RL algorithms: SVG, STEVE, MBPO, DDPPO, DreamerV3
  • Exploration algorithms: HER, RND, ICM, NGU
  • LLM + RL Algorithms: PPO-max, DPO, PromptPG, PromptAWR
  • Other algorithms: such as PER, PLR, PCGrad
  • MCTS + RL algorithms: AlphaZero, MuZero, please refer to LightZero
  • Generative Model + RL algorithms: Diffusion-QL, QGPO, SRPO, please refer to GenerativeRL
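All of the algorithms above plug into the Env/Policy/Model decomposition described earlier. As a toy illustration of how that decomposition fits together, here is a pure-Python sketch of the serial collect loop; every name in it is illustrative, not DI-engine's actual API:

```python
import random

# Toy illustration of the Env / Policy decomposition described above.
# All class and method names here are illustrative, not DI-engine's actual API.

class ToyEnv:
    """A 10-step environment: reward is 1 when the action matches the observed target."""

    def reset(self):
        self._t = 0
        self._target = random.randint(0, 1)
        return self._target  # the observation fully reveals the target, for simplicity

    def step(self, action):
        self._t += 1
        reward = 1.0 if action == self._target else 0.0
        done = self._t >= 10
        return self._target, reward, done

class GreedyPolicy:
    """A trivial policy that acts on the observation directly."""

    def forward(self, obs):
        return obs

def collect_episode(env, policy):
    """The serial collect loop that DI-engine's task/middleware abstractions generalize."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy.forward(obs))
        total += reward
    return total

print(collect_episode(ToyEnv(), GreedyPolicy()))  # → 10.0
```

In the real engine, the collect, store and train stages are separate middleware pieces chained by a task object, which is what makes the pipeline asynchronous-native.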

DI-engine aims to standardize different Decision Intelligence environments and applications, supporting both academic research and prototype applications. Various training pipelines and customized decision AI applications are also supported:

  • Traditional academic environments

    • DI-zoo: various decision intelligence demonstrations and benchmark environments with DI-engine.
  • Tutorial courses

  • Real world decision AI applications

    • DI-star: Decision AI in StarCraftII
    • PsyDI: Towards a Multi-Modal and Interactive Chatbot for Psychological Assessments
    • DI-drive: Auto-driving platform
    • DI-sheep: Decision AI in 3 Tiles Game
    • DI-smartcross: Decision AI in Traffic Light Control
    • DI-bioseq: Decision AI in Biological Sequence Prediction and Searching
    • DI-1024: Deep Reinforcement Learning + 1024 Game
  • Research paper

    • InterFuser: [CoRL 2022] Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer
    • ACE: [AAAI 2023] ACE: Cooperative Multi-agent Q-learning with Bidirectional Action-Dependency
    • GoBigger: [ICLR 2023] Multi-Agent Decision Intelligence Environment
    • DOS: [CVPR 2023] ReasonNet: End-to-End Driving with Temporal and Global Reasoning
    • LightZero: [NeurIPS 2023 Spotlight] A lightweight and efficient MCTS/AlphaZero/MuZero algorithm toolkit
    • SO2: [AAAI 2024] A Perspective of Q-value Estimation on Offline-to-Online Reinforcement Learning
    • LMDrive: [CVPR 2024] LMDrive: Closed-Loop End-to-End Driving with Large Language Models
    • SmartRefine: [CVPR 2024] SmartRefine: A Scenario-Adaptive Refinement Framework for Efficient Motion Prediction
    • ReZero: Boosting MCTS-based Algorithms by Backward-view and Entire-buffer Reanalyze
    • UniZero: Generalized and Efficient Planning with Scalable Latent World Models
  • Docs and Tutorials

On the low-level end, DI-engine comes with a set of highly re-usable modules, including RL optimization functions, PyTorch utilities and auxiliary tools.

DI-engine also includes dedicated system optimizations and designs for efficient and robust large-scale RL training:


Have fun with exploration and exploitation.

Outline

Installation

You can simply install DI-engine from PyPI with the following command:

pip install DI-engine

For more information about installation, you can refer to installation.
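As a quick sanity check after installation, you can verify that the package (imported as `ding`) is importable. This snippet uses only the standard library:

```python
import importlib.util

def is_installed(module_name: str) -> bool:
    """Return True if `module_name` is importable in the current environment."""
    return importlib.util.find_spec(module_name) is not None

# After `pip install DI-engine`, the package is imported as `ding`:
print("DI-engine available:", is_installed("ding"))
```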

Our Docker Hub repository can be found here, where we prepare a base image and environment images with common RL environments.

  • base: opendilab/ding:nightly
  • rpc: opendilab/ding:nightly-rpc
  • atari: opendilab/ding:nightly-atari
  • mujoco: opendilab/ding:nightly-mujoco
  • dmc: opendilab/ding:nightly-dmc2gym
  • metaworld: opendilab/ding:nightly-metaworld
  • smac: opendilab/ding:nightly-smac
  • grf: opendilab/ding:nightly-grf
  • cityflow: opendilab/ding:nightly-cityflow
  • evogym: opendilab/ding:nightly-evogym
  • d4rl: opendilab/ding:nightly-d4rl

The detailed documentation is hosted at doc | doc (Chinese).

Quick Start

3 Minutes Kickoff

3 Minutes Kickoff (colab)

DI-engine Huggingface Kickoff (colab)

How to migrate a new RL Env (English | Chinese)

How to customize the neural network model (English | Chinese)

Examples of testing/deploying RL policies (Chinese)

Comparison between the new and old pipelines (Chinese)

Feature

Algorithm Versatility


discrete: discrete action space, the common label of classic DRL algorithms (No. 1-23)

continuous: continuous action space (No. 1-23)

hybrid: hybrid (discrete + continuous) action space (No. 1-23)

dist: Distributed Reinforcement Learning

MARL: Multi-Agent Reinforcement Learning

exp: Exploration Mechanisms in Reinforcement Learning

IL: Imitation Learning

offline: Offline Reinforcement Learning

mbrl: Model-Based Reinforcement Learning

other: other sub-direction algorithms, usually used as plug-ins in the whole pipeline

P.S.: The .py files in the Runnable Demo column can be found in dizoo.

| No. | Algorithm | Label | Doc and Implementation | Runnable Demo |
| --- | --- | --- | --- | --- |
| 1 | DQN | discrete | DQN doc, DQN doc (zh), policy/dqn | python3 -u cartpole_dqn_main.py / ding -m serial -c cartpole_dqn_config.py -s 0 |
| 2 | C51 | discrete | C51 doc, policy/c51 | ding -m serial -c cartpole_c51_config.py -s 0 |
| 3 | QRDQN | discrete | QRDQN doc, policy/qrdqn | ding -m serial -c cartpole_qrdqn_config.py -s 0 |
| 4 | IQN | discrete | IQN doc, policy/iqn | ding -m serial -c cartpole_iqn_config.py -s 0 |
| 5 | FQF | discrete | FQF doc, policy/fqf | ding -m serial -c cartpole_fqf_config.py -s 0 |
| 6 | Rainbow | discrete | Rainbow doc, policy/rainbow | ding -m serial -c cartpole_rainbow_config.py -s 0 |
| 7 | SQL | discrete, continuous | SQL doc, policy/sql | ding -m serial -c cartpole_sql_config.py -s 0 |
| 8 | R2D2 | dist, discrete | R2D2 doc, policy/r2d2 | ding -m serial -c cartpole_r2d2_config.py -s 0 |
| 9 | PG | discrete | PG doc, policy/pg | ding -m serial -c cartpole_pg_config.py -s 0 |
| 10 | PromptPG | discrete | policy/prompt_pg | ding -m serial_onpolicy -c tabmwp_pg_config.py -s 0 |
| 11 | A2C | discrete | A2C doc, policy/a2c | ding -m serial -c cartpole_a2c_config.py -s 0 |
| 12 | PPO/MAPPO | discrete, continuous, MARL | PPO doc, policy/ppo | python3 -u cartpole_ppo_main.py / ding -m serial_onpolicy -c cartpole_ppo_config.py -s 0 |
| 13 | PPG | discrete | PPG doc, policy/ppg | python3 -u cartpole_ppg_main.py |
| 14 | ACER | discrete, continuous | ACER doc, policy/acer | ding -m serial -c cartpole_acer_config.py -s 0 |
| 15 | IMPALA | dist, discrete | IMPALA doc, policy/impala | ding -m serial -c cartpole_impala_config.py -s 0 |
| 16 | DDPG/PADDPG | continuous, hybrid | DDPG doc, policy/ddpg | ding -m serial -c pendulum_ddpg_config.py -s 0 |
| 17 | TD3 | continuous, hybrid | TD3 doc, policy/td3 | python3 -u pendulum_td3_main.py / ding -m serial -c pendulum_td3_config.py -s 0 |
| 18 | D4PG | continuous | D4PG doc, policy/d4pg | python3 -u pendulum_d4pg_config.py |
| 19 | SAC/[MASAC] | discrete, continuous, MARL | SAC doc, policy/sac | ding -m serial -c pendulum_sac_config.py -s 0 |
| 20 | PDQN | hybrid | policy/pdqn | ding -m serial -c gym_hybrid_pdqn_config.py -s 0 |
| 21 | MPDQN | hybrid | policy/pdqn | ding -m serial -c gym_hybrid_mpdqn_config.py -s 0 |
| 22 | HPPO | hybrid | policy/ppo | ding -m serial_onpolicy -c gym_hybrid_hppo_config.py -s 0 |
| 23 | BDQ | hybrid | policy/bdq | python3 -u hopper_bdq_config.py |
| 24 | MDQN | discrete | policy/mdqn | python3 -u asterix_mdqn_config.py |
| 25 | QMIX | MARL | QMIX doc, policy/qmix | ding -m serial -c smac_3s5z_qmix_config.py -s 0 |
| 26 | COMA | MARL | COMA doc, policy/coma | ding -m serial -c smac_3s5z_coma_config.py -s 0 |
| 27 | QTran | MARL | policy/qtran | ding -m serial -c smac_3s5z_qtran_config.py -s 0 |
| 28 | WQMIX | MARL | WQMIX doc, policy/wqmix | ding -m serial -c smac_3s5z_wqmix_config.py -s 0 |
| 29 | CollaQ | MARL | CollaQ doc, policy/collaq | ding -m serial -c smac_3s5z_collaq_config.py -s 0 |
| 30 | MADDPG | MARL | MADDPG doc, policy/ddpg | ding -m serial -c ptz_simple_spread_maddpg_config.py -s 0 |
| 31 | GAIL | IL | GAIL doc, reward_model/gail | ding -m serial_gail -c cartpole_dqn_gail_config.py -s 0 |
| 32 | SQIL | IL | SQIL doc, entry/sqil | ding -m serial_sqil -c cartpole_sqil_config.py -s 0 |
| 33 | DQFD | IL | DQFD doc, policy/dqfd | ding -m serial_dqfd -c cartpole_dqfd_config.py -s 0 |
| 34 | R2D3 | IL | R2D3 doc, R2D3 doc (zh), policy/r2d3 | python3 -u pong_r2d3_r2d2expert_config.py |
| 35 | Guided Cost Learning | IL | Guided Cost Learning doc (zh), reward_model/guided_cost | python3 lunarlander_gcl_config.py |
| 36 | TREX | IL | TREX doc, reward_model/trex | python3 mujoco_trex_main.py |
| 37 | Implicit Behavioral Cloning (DFO+MCMC) | IL | policy/ibc, model/template/ebm | python3 d4rl_ibc_main.py -s 0 -c pen_human_ibc_mcmc_config.py |
| 38 | BCO | IL | entry/bco | python3 -u cartpole_bco_config.py |
| 39 | HER | exp | HER doc, reward_model/her | python3 -u bitflip_her_dqn.py |
| 40 | RND | exp | RND doc, reward_model/rnd | python3 -u cartpole_rnd_onppo_config.py |
| 41 | ICM | exp | ICM doc, ICM doc (zh), reward_model/icm | python3 -u cartpole_ppo_icm_config.py |
| 42 | CQL | offline | CQL doc, policy/cql | python3 -u d4rl_cql_main.py |
| 43 | TD3BC | offline | TD3BC doc, policy/td3_bc | python3 -u d4rl_td3_bc_main.py |
| 44 | Decision Transformer | offline | policy/dt | python3 -u d4rl_dt_mujoco.py |
| 45 | EDAC | offline | EDAC doc, policy/edac | python3 -u d4rl_edac_main.py |
| 46 | QGPO | offline | QGPO doc, policy/qgpo | python3 -u ding/example/qgpo.py |
| 47 | MBSAC (SAC+MVE+SVG) | continuous, mbrl | policy/mbpolicy/mbsac | python3 -u pendulum_mbsac_mbpo_config.py / python3 -u pendulum_mbsac_ddppo_config.py |
| 48 | STEVESAC (SAC+STEVE+SVG) | continuous, mbrl | policy/mbpolicy/mbsac | python3 -u pendulum_stevesac_mbpo_config.py |
| 49 | MBPO | mbrl | MBPO doc, world_model/mbpo | python3 -u pendulum_sac_mbpo_config.py |
| 50 | DDPPO | mbrl | world_model/ddppo | python3 -u pendulum_mbsac_ddppo_config.py |
| 51 | DreamerV3 | mbrl | world_model/dreamerv3 | python3 -u cartpole_balance_dreamer_config.py |
| 52 | PER | other | worker/replay_buffer | rainbow demo |
| 53 | GAE | other | rl_utils/gae | ppo demo |
| 54 | ST-DIM | other | torch_utils/loss/contrastive_loss | ding -m serial -c cartpole_dqn_stdim_config.py -s 0 |
| 55 | PLR | other | PLR doc, data/level_replay/level_sampler | python3 -u bigfish_plr_config.py -s 0 |
| 56 | PCGrad | other | torch_utils/optimizer_helper/PCGrad | python3 -u multi_mnist_pcgrad_main.py -s 0 |
| 57 | AWR | discrete | policy/ibc | python3 -u tabmwp_awr_config.py |
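To give a flavor of one of the plug-in components listed above, here is a minimal, standard-library sketch of proportional prioritized sampling, the idea behind PER (No. 52). DI-engine's worker/replay_buffer is far more complete (sum trees, importance-sampling weights, staleness checks, etc.); this is illustrative only:

```python
import random

# Minimal sketch of proportional prioritized experience replay (PER).
# Illustrative only; not DI-engine's actual replay buffer implementation.

class PrioritizedBuffer:
    def __init__(self, alpha: float = 0.6):
        self.alpha = alpha        # how strongly priorities skew sampling (0 = uniform)
        self.data = []
        self.priorities = []

    def push(self, transition, priority: float = 1.0):
        self.data.append(transition)
        self.priorities.append(priority ** self.alpha)

    def sample(self, k: int):
        # with-replacement sampling proportional to stored priorities
        idxs = random.choices(range(len(self.data)), weights=self.priorities, k=k)
        return [self.data[i] for i in idxs], idxs

    def update_priority(self, idx: int, td_error: float):
        # transitions with larger TD error are replayed more often
        self.priorities[idx] = (abs(td_error) + 1e-6) ** self.alpha

buf = PrioritizedBuffer()
for i in range(8):
    buf.push({"step": i})
batch, idxs = buf.sample(4)
print(len(batch))  # → 4
```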

Environment Versatility

| No. | Environment | Label | Visualization | Code and Doc Links |
| --- | --- | --- | --- | --- |
| 1 | Atari | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 2 | box2d/bipedalwalker | continuous | original | dizoo link, env tutorial, env guide (zh) |
| 3 | box2d/lunarlander | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 4 | classic_control/cartpole | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 5 | classic_control/pendulum | continuous | original | dizoo link, env tutorial, env guide (zh) |
| 6 | competitive_rl | discrete, selfplay | original | dizoo link, env guide (zh) |
| 7 | gfootball | discrete, sparse, selfplay | original | dizoo link, env tutorial, env guide (zh) |
| 8 | minigrid | discrete, sparse | original | dizoo link, env tutorial, env guide (zh) |
| 9 | MuJoCo | continuous | original | dizoo link, env tutorial, env guide (zh) |
| 10 | PettingZoo | discrete, continuous, marl | original | dizoo link, env tutorial, env guide (zh) |
| 11 | overcooked | discrete, marl | original | dizoo link, env tutorial |
| 12 | procgen | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 13 | pybullet | continuous | original | dizoo link, env guide (zh) |
| 14 | smac | discrete, marl, selfplay, sparse | original | dizoo link, env tutorial, env guide (zh) |
| 15 | d4rl | offline | original | dizoo link, env guide (zh) |
| 16 | league_demo | discrete, selfplay | original | dizoo link |
| 17 | pomdp atari | discrete | | dizoo link |
| 18 | bsuite | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 19 | ImageNet | IL | original | dizoo link, env guide (zh) |
| 20 | slime_volleyball | discrete, selfplay | original | dizoo link, env tutorial, env guide (zh) |
| 21 | gym_hybrid | hybrid | original | dizoo link, env tutorial, env guide (zh) |
| 22 | GoBigger | hybrid, marl, selfplay | original | dizoo link, env tutorial, env guide (zh) |
| 23 | gym_soccer | hybrid | original | dizoo link, env guide (zh) |
| 24 | multiagent_mujoco | continuous, marl | original | dizoo link, env guide (zh) |
| 25 | bitflip | discrete, sparse | original | dizoo link, env guide (zh) |
| 26 | sokoban | discrete | Game 2 | dizoo link, env tutorial, env guide (zh) |
| 27 | gym_anytrading | discrete | original | dizoo link, env tutorial |
| 28 | mario | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 29 | dmc2gym | continuous | original | dizoo link, env tutorial, env guide (zh) |
| 30 | evogym | continuous | original | dizoo link, env tutorial, env guide (zh) |
| 31 | gym-pybullet-drones | continuous | original | dizoo link, env guide (zh) |
| 32 | beergame | discrete | original | dizoo link, env guide (zh) |
| 33 | classic_control/acrobot | discrete | original | dizoo link, env guide (zh) |
| 34 | box2d/car_racing | discrete, continuous | original | dizoo link, env guide (zh) |
| 35 | metadrive | continuous | original | dizoo link, env guide (zh) |
| 36 | cliffwalking | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 37 | tabmwp | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 38 | frozen_lake | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 39 | ising_model | discrete, marl | original | dizoo link, env tutorial, env guide (zh) |
| 40 | taxi | discrete | original | dizoo link, env tutorial, env guide (zh) |

discrete means discrete action space

continuous means continuous action space

hybrid means hybrid (discrete + continuous) action space

MARL means multi-agent RL environment

sparse means an environment with sparse reward, which typically requires dedicated exploration

offline means offline RL environment

IL means Imitation Learning or Supervised Learning Dataset

selfplay means an environment that supports agent-vs-agent battle

P.S. Some environments in Atari, such as MontezumaRevenge, are also of the sparse-reward type.
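To make the sparse label concrete, here is a toy environment in the spirit of the `bitflip` entry above, the classic testbed for HER: the agent gets no reward signal at all until the goal pattern is hit. This is illustrative only; the dizoo implementation differs:

```python
import random

# Toy sparse-reward environment in the spirit of `bitflip` (the classic HER testbed).
# Illustrative only; not the dizoo implementation.

class BitFlipEnv:
    """Flip one bit per step; reward is nonzero only when the goal pattern is reached."""

    def __init__(self, n_bits: int = 4):
        self.n = n_bits

    def reset(self):
        self.state = [random.randint(0, 1) for _ in range(self.n)]
        self.goal = [random.randint(0, 1) for _ in range(self.n)]
        self.t = 0
        return list(self.state), list(self.goal)

    def step(self, action: int):
        self.state[action] ^= 1  # flip the chosen bit
        self.t += 1
        solved = self.state == self.goal
        reward = 1.0 if solved else 0.0  # sparse: no signal until the goal is hit
        done = solved or self.t >= self.n
        return list(self.state), reward, done
```

With random actions the chance of ever seeing a reward shrinks exponentially in `n_bits`, which is exactly why relabeling methods such as HER help here.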

General Data Container: TreeTensor

DI-engine utilizes TreeTensor as the basic data container in various components. It is easy to use and consistent across different code modules such as environment definition, data processing and DRL optimization. Here are some concrete code examples:

  • TreeTensor can easily extend all the operations of torch.Tensor to nested data:

    import treetensor.torch as ttorch
    
    
    # create random tensor
    data = ttorch.randn({'a': (3, 2), 'b': {'c': (3, )}})
    # clone+detach tensor
    data_clone = data.clone().detach()
    # access tree structure like attribute
    a = data.a
    c = data.b.c
    # stack/cat/split
    stacked_data = ttorch.stack([data, data_clone], 0)
    cat_data = ttorch.cat([data, data_clone], 0)
    data, data_clone = ttorch.split(stacked_data, 1)
    # reshape
    data = data.unsqueeze(-1)
    data = data.squeeze(-1)
    flatten_data = data.view(-1)
    # indexing
    data_0 = data[0]
    data_1to2 = data[1:2]
    # execute math calculations
    data = data.sin()
    data.b.c.cos_().clamp_(-1, 1)
    data += data ** 2
    # backward
    data.requires_grad_(True)
    loss = data.arctan().mean()
    loss.backward()
    # print shape
    print(data.shape)
    # result
    # <Size 0x7fbd3346ddc0>
    # ├── 'a' --> torch.Size([1, 3, 2])
    # └── 'b' --> <Size 0x7fbd3346dd00>
    #     └── 'c' --> torch.Size([1, 3])
  • TreeTensor makes it simple yet effective to implement a classic deep reinforcement learning pipeline:

    import torch
    import treetensor.torch as ttorch
    
    B = 4
    
    
    def get_item():
        return {
            'obs': {
                'scalar': torch.randn(12),
                'image': torch.randn(3, 32, 32),
            },
            'action': torch.randint(0, 10, size=(1,)),
            'reward': torch.rand(1),
            'done': False,
        }
    
    
    data = [get_item() for _ in range(B)]
    
    
    # execute `stack` op
    - def stack(data, dim):
    -     elem = data[0]
    -     if isinstance(elem, torch.Tensor):
    -         return torch.stack(data, dim)
    -     elif isinstance(elem, dict):
    -         return {k: stack([item[k] for item in data], dim) for k in elem.keys()}
    -     elif isinstance(elem, bool):
    -         return torch.BoolTensor(data)
    -     else:
    -         raise TypeError("not support elem type: {}".format(type(elem)))
    - stacked_data = stack(data, dim=0)
    + data = [ttorch.tensor(d) for d in data]
    + stacked_data = ttorch.stack(data, dim=0)
    
    # validate
    - assert stacked_data['obs']['image'].shape == (B, 3, 32, 32)
    - assert stacked_data['action'].shape == (B, 1)
    - assert stacked_data['reward'].shape == (B, 1)
    - assert stacked_data['done'].shape == (B,)
    - assert stacked_data['done'].dtype == torch.bool
    + assert stacked_data.obs.image.shape == (B, 3, 32, 32)
    + assert stacked_data.action.shape == (B, 1)
    + assert stacked_data.reward.shape == (B, 1)
    + assert stacked_data.done.shape == (B,)
    + assert stacked_data.done.dtype == torch.bool

Feedback and Contribution

We appreciate all feedback and contributions that improve DI-engine, in both algorithms and system design. CONTRIBUTING.md offers the necessary information.

Supporters

↳ Stargazers


↳ Forkers


Citation

@misc{ding,
    title={DI-engine: A Universal AI System/Engine for Decision Intelligence},
    author={Niu, Yazhe and Xu, Jingxin and Pu, Yuan and Nie, Yunpeng and Zhang, Jinouwen and Hu, Shuai and Zhao, Liangxuan and Zhang, Ming and Liu, Yu},
    publisher={GitHub},
    howpublished={\url{https://github.com/opendilab/DI-engine}},
    year={2021},
}

License

DI-engine is released under the Apache 2.0 license.