This configuration class is used to instantiate a RoBERTa model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the RoBERTa `roberta-base` architecture (a usage sketch appears at the end of this section).

The original BERT uses subword-level tokenization with a vocabulary size of 30K, which is learned after input preprocessing and with the help of several heuristics. RoBERTa instead uses bytes rather than unicode characters as the base units for its subwords and expands the vocabulary to roughly 50K units, without any additional preprocessing or tokenization of the input.
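For illustration, the following is a minimal sketch of that tokenization difference using the `transformers` tokenizers; it assumes the library is installed and that the `bert-base-uncased` and `roberta-base` vocabularies can be downloaded:

```python
from transformers import AutoTokenizer

# BERT: WordPiece vocabulary of roughly 30K entries
bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
# RoBERTa: byte-level BPE vocabulary of roughly 50K entries
roberta_tok = AutoTokenizer.from_pretrained("roberta-base")

print(bert_tok.vocab_size)     # ~30K entries
print(roberta_tok.vocab_size)  # ~50K entries

text = "RoBERTa uses byte-level subword units."
# BERT marks word-internal WordPiece subwords with "##"
print(bert_tok.tokenize(text))
# RoBERTa works on bytes, so leading spaces show up as "Ġ" and no
# input ever falls outside the vocabulary
print(roberta_tok.tokenize(text))
```

Because every byte sequence can be represented, the RoBERTa tokenizer never produces an unknown token, whereas WordPiece may fall back to `[UNK]` for characters outside its vocabulary.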
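Returning to the configuration class described at the start of this section, here is a minimal sketch of the default-initialization pattern, assuming the standard `RobertaConfig` and `RobertaModel` classes from `transformers`:

```python
from transformers import RobertaConfig, RobertaModel

# Initializing a configuration with the defaults (similar to roberta-base)
configuration = RobertaConfig()

# Initializing a model with random weights from that configuration
model = RobertaModel(configuration)

# The configuration can be read back from the model
configuration = model.config
```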