Environment

To start using the Gym library, we can import it and create an environment with the following commands.

import gym
env = gym.make('MountainCar-v0')
env.reset()

Now that we have imported the mountain car environment, we can start to explore it.

All RL environments have:

  • State space

    • The set of all possible states the environment can be in.

  • Action space

    • The set of actions that you can take within the environment.

We can use the following code:

> print('State space: ',env.observation_space)
  State space: Box(2,)
> print('Action space:', env.action_space)
  Action space: Discrete(3)

We can see that the state space is a two-dimensional Box, so each state observation is a vector of two values, and that the action space comprises three discrete actions.
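As a small sketch (assuming a standard Gym installation), we can draw a random action from this Discrete(3) space and step the environment with it. For MountainCar-v0 the action indices mean 0 = accelerate left, 1 = do nothing, 2 = accelerate right:

```python
import gym

env = gym.make('MountainCar-v0')
env.reset()

# Draw a random valid action: 0 = accelerate left, 1 = do nothing, 2 = accelerate right
action = env.action_space.sample()

# step() applies the action; newer Gym versions return five values
# (obs, reward, terminated, truncated, info) instead of four
result = env.step(action)
obs, reward = result[0], result[1]
print('action:', action, 'reward:', reward, 'next state:', obs)
```

Sampling from `env.action_space` is a convenient way to build a random baseline agent before training anything.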

From this, we can see that the first element of the state vector (representing the car's position) can take any value in the range -1.2 to 0.6, while the second element (representing the car's velocity) can take any value in the range -0.07 to 0.07.
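These bounds do not need to be taken on faith: a Box space exposes its per-dimension limits as arrays, so a minimal check (again assuming Gym is installed) is:

```python
import gym

env = gym.make('MountainCar-v0')

# Box spaces expose per-dimension bounds as arrays:
# index 0 is the car's position, index 1 its velocity
print('low: ', env.observation_space.low)   # position >= -1.2, velocity >= -0.07
print('high:', env.observation_space.high)  # position <= 0.6,  velocity <= 0.07
```

Reading the bounds this way also generalizes to other environments, where the ranges differ.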
