When this update is applied to the real (stochastic) environment as-is, the probability of failure grows. As an alternative, only a portion of the new estimate is used when updating Q, so the table keeps most of its previous value and blends in the new information gradually.
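Concretely, this is the learning-rate form of the Q-update implemented in the example below, where α corresponds to learning_rate and γ to the discount factor dis:

Q(s, a) ← (1 − α) · Q(s, a) + α · (r + γ · max_a' Q(s', a'))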
Practice example
import gym
import numpy as np
import matplotlib.pyplot as plt
env = gym.make('FrozenLake-v0')
Q = np.zeros([env.observation_space.n, env.action_space.n])
learning_rate = .85
dis = .99
num_episodes = 2000
rList = []
for i in range(num_episodes):
    # Reset environment and get first new observation
    state = env.reset()
    rAll = 0
    done = False

    while not done:
        action = np.argmax(Q[state, :] + np.random.randn(1, env.action_space.n) / (i + 1))

        # Get new state and reward from environment
        new_state, reward, done, _ = env.step(action)

        # Update Q-Table with new knowledge using learning rate
        Q[state, action] = (1 - learning_rate) * Q[state, action] \
            + learning_rate * (reward + dis * np.max(Q[new_state, :]))

        rAll += reward
        state = new_state

    rList.append(rAll)
print("Score over time: " + str(sum(rList) / num_episodes))
print("Final Q-Table Values")
print(Q)
plt.bar(range(len(rList)), rList, color="blue")
plt.show()
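As a quick follow-up, the learned table can be checked by rolling out the greedy policy without the exploration noise. This is a minimal sketch, assuming the same gym 0.x FrozenLake-v0 API used above; eval_episodes is just an illustrative name:

# Evaluate the greedy policy from the learned Q-table (no exploration noise).
eval_episodes = 100
successes = 0
for _ in range(eval_episodes):
    state = env.reset()
    done = False
    while not done:
        action = np.argmax(Q[state, :])   # always take the best-known action
        state, reward, done, _ = env.step(action)
    successes += reward                   # FrozenLake gives reward 1 only when the goal is reached
print("Greedy success rate: {:.2f}".format(successes / eval_episodes))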