Playing chess against the Transformer
We trained a Transformer autoregressive language model to learn to play chess; read on to find out how the experiment played out. Our goal was to provide insight into the kinds of learning Transformers are capable of, beyond the familiar text generation examples.
The Transformer architecture
Even if you don’t follow the field of Natural Language Processing, you’ve probably heard of the Transformer by now: a neural network architecture that relies on an attention mechanism to make sense of sequential data. Although not restricted to text generation, the Transformer has disrupted the NLP ...
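To make the attention idea concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation that Transformer layers stack and run across many heads in parallel. The shapes and the toy example below are purely illustrative and are not taken from our chess model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: each query position mixes the value vectors
    of all key positions, weighted by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # (seq_len, seq_len) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over key positions
    return weights @ V                                # (seq_len, d_v) context vectors

# Toy example: a "sequence" of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)    # (4, 8)
```

In a full Transformer, Q, K, and V are learned linear projections of the token embeddings, and this operation is repeated per head and per layer; the sketch only shows the weighting step that lets each position attend to every other position in the sequence.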