


Neuron Poker: OpenAI gym environment for Texas Hold'em poker

This is an environment for training neural networks to play Texas Hold'em. Please try to model your own players and create a pull request so we can collaborate and create the best possible player.

To get started:

- Install Anaconda; installing PyCharm is also recommended.
- Create a virtual environment with conda create -n neuron_poker python=3.7.
- Activate it with conda activate neuron_poker, then install all required packages with pip install -r requirements.txt.
- Run 6 random players playing against each other.
- To manually control the players: main.py selfplay keypress -render.
- Example of a genetic algorithm with self-improvement: main.py selfplay equity_improvement -improvement_rounds=20 -episodes=10.
- In order to use the C++ version of the equity calculator, you will also need to install Visual Studio 2019 (gcc over Cygwin may work as well). To use it, pass the -c option when running main.py.
- For more advanced users: main.py selfplay dqn_train -c will start training the deep Q agent with C++ Monte Carlo for faster calculation.

At the end of an episode, the performance of the players can be observed via the summary plot.

Main.py: entry point and command line interpreter.
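The commands above are all routed through main.py's command line interpreter. As a minimal sketch of how such a dispatcher could be wired up (the real project's argument handling may differ; the subcommand, agent names, and flags here simply mirror the examples in this README, and argparse is an assumption):

```python
import argparse


def build_parser():
    # Hypothetical re-creation of main.py's CLI for illustration only;
    # the actual project may use a different parsing library and options.
    parser = argparse.ArgumentParser(prog="main.py")
    sub = parser.add_subparsers(dest="command", required=True)

    selfplay = sub.add_parser("selfplay", help="let agents play against each other")
    selfplay.add_argument(
        "agent",
        choices=["random", "keypress", "equity_improvement", "dqn_train"],
        help="which agent type controls the players",
    )
    selfplay.add_argument("-render", action="store_true", help="render the table")
    selfplay.add_argument(
        "-c", "--use_cpp", action="store_true",
        help="use the C++ Monte Carlo equity calculator",
    )
    selfplay.add_argument("-improvement_rounds", type=int, default=1)
    selfplay.add_argument("-episodes", type=int, default=1)
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args.command, args.agent)
```

With this layout, main.py selfplay dqn_train -c parses into a namespace with command="selfplay", agent="dqn_train", and use_cpp=True, which the entry point can then map to the corresponding training loop.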
