The tokenizer moves through the stream, inspecting characters one at a time until it encounters either an integer or the end of file. If it encounters neither token, it returns a lexical error.
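The scanning loop described above can be sketched as follows. This is a minimal illustration, not the actual implementation; the names `tokenize` and `LexicalError` are assumptions introduced here.

```python
class LexicalError(Exception):
    """Raised when a character begins neither an integer nor end of file."""


def tokenize(stream: str):
    """Yield integer tokens one at a time until end of file.

    Hypothetical sketch: scans character by character, emitting
    ("INT", value) for each run of digits and ("EOF", None) at the end;
    any other non-whitespace character is a lexical error.
    """
    i = 0
    while i < len(stream):
        ch = stream[i]
        if ch.isspace():  # skip whitespace between tokens
            i += 1
            continue
        if ch.isdigit():  # start of an integer: consume the full digit run
            j = i
            while j < len(stream) and stream[j].isdigit():
                j += 1
            yield ("INT", int(stream[i:j]))
            i = j
            continue
        # neither an integer nor end of file: lexical error
        raise LexicalError(f"unexpected character {ch!r} at offset {i}")
    yield ("EOF", None)  # end of file terminates the scan
```

For example, `list(tokenize("12 34"))` produces two integer tokens followed by the end-of-file token, while an input such as `"12x"` raises `LexicalError` when the scan reaches `x`.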