
How to get the full evaluation list of each environment #1010

Closed
Lukeeeeee opened this issue Apr 22, 2018 · 2 comments

Comments

@Lukeeeeee

Is there a list of evaluations for each environment, like this one: https://gym.openai.com/evaluations/eval_aqTWbALwQEKrLIyU9ZzmLw/ ? Most environment pages showed evaluation results like this: https://gym.openai.com/envs/Reacher-v2/ .

Also, when it says:

Reacher-v1 is considered "solved" when the agent obtains an average reward of at least -3.75 over 100 consecutive episodes.

"Average reward" here means the mean of cumulative rewards(sum of one step reward within one episode) over 100 episodes?

Thanks!

@teostoleru

@Lukeeeeee the only thing I could find is this comment where they compare v1 and v2 environments: #834

@christopherhesse
Contributor

I believe this information is located here https://github.com/openai/gym/blob/master/gym/envs/__init__.py#L10 in the code, though it only exists for some environments and is not as relevant since we don't run the scoreboard anymore.
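For reference, the threshold registered there can also be read programmatically. A minimal sketch, assuming a gym version that still exposes `gym.spec()` and has Reacher-v2 registered:

```python
import gym

# Each registered environment carries an EnvSpec; its optional
# reward_threshold field is the old scoreboard's "solved" criterion.
# It is None for environments that never defined one.
spec = gym.spec("Reacher-v2")
print(spec.reward_threshold)  # e.g. -3.75; None if no threshold was set
```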

I think you're correct: this should mean that you take all the rewards in each episode, sum them together, and then calculate the per-episode average over 100 episodes.
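To make that concrete, here is a minimal sketch of the calculation, assuming the classic gym step API (`obs, reward, done, info`), a working Reacher-v2 (MuJoCo) install, and random actions as a stand-in for a trained policy:

```python
import gym
import numpy as np

env = gym.make("Reacher-v2")
returns = []
for _ in range(100):
    env.reset()
    episode_return, done = 0.0, False
    while not done:
        # Random action as a placeholder for a real agent's policy.
        _, reward, done, _ = env.step(env.action_space.sample())
        episode_return += reward  # sum of per-step rewards in this episode
    returns.append(episode_return)

# "Solved" means the mean episodic return over 100 consecutive
# episodes reaches the registered threshold.
print(np.mean(returns) >= gym.spec("Reacher-v2").reward_threshold)
```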
