How to use ELHACHYMI/bert-ner with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="ELHACHYMI/bert-ner")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("ELHACHYMI/bert-ner")
model = AutoModelForTokenClassification.from_pretrained("ELHACHYMI/bert-ner")
```

Model: ELHACHYMI/bert-ner
Base model: bert-base-uncased
Task: Token Classification → Named Entity Recognition (NER)
Dataset: CoNLL-2003 (English)
This model is a fine-tuned version of BERT Base Uncased on the CoNLL-2003 Named Entity Recognition (NER) dataset.
It predicts four entity types: persons (PER), organizations (ORG), locations (LOC), and miscellaneous entities (MISC).
The model is suitable for information extraction, document understanding, chatbot entity detection, and structured text processing.
The model uses the standard IOB2 tagging scheme:
| ID | Label |
|---|---|
| 0 | O |
| 1 | B-PER |
| 2 | I-PER |
| 3 | B-ORG |
| 4 | I-ORG |
| 5 | B-LOC |
| 6 | I-LOC |
| 7 | B-MISC |
| 8 | I-MISC |
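Under the IOB2 scheme above, a `B-` tag opens an entity and consecutive `I-` tags of the same type extend it. A minimal sketch of decoding such tags into entity spans (the `decode_iob2` helper is illustrative, not part of the model's API):

```python
# Minimal IOB2 decoder: groups per-token tags into (entity_type, text) spans.

def decode_iob2(tokens, tags):
    """Group (token, IOB2 tag) pairs into (entity_type, text) spans."""
    entities = []
    current_type, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag always starts a new entity, closing any open one
            if current_type:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            # An I- tag of the same type continues the open entity
            current_tokens.append(token)
        else:
            # "O", or an I- tag that does not continue the open entity
            if current_type:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type:
        entities.append((current_type, " ".join(current_tokens)))
    return entities

tokens = ["Bill", "Gates", "founded", "Microsoft", "in", "the", "United", "States", "."]
tags = ["B-PER", "I-PER", "O", "B-ORG", "O", "O", "B-LOC", "I-LOC", "O"]
print(decode_iob2(tokens, tags))
# → [('PER', 'Bill Gates'), ('ORG', 'Microsoft'), ('LOC', 'United States')]
```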
Example usage:

```python
from transformers import pipeline

ner = pipeline("ner", model="ELHACHYMI/bert-ner", aggregation_strategy="simple")

text = "Bill Gates founded Microsoft in the United States."
print(ner(text))
```
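With `aggregation_strategy="simple"`, each prediction is a dict with `entity_group`, `score`, `word`, `start`, and `end` keys, so the output is easy to post-process. A sketch of filtering by confidence, where the `results` list and the 0.90 threshold are illustrative rather than actual model output:

```python
# Post-process pipeline output: keep only confident predictions.

def filter_entities(results, min_score=0.90):
    """Drop predictions whose confidence falls below the threshold."""
    return [r for r in results if r["score"] >= min_score]

# Illustrative sample mirroring the aggregated pipeline output format
results = [
    {"entity_group": "PER", "score": 0.998, "word": "Bill Gates", "start": 0, "end": 10},
    {"entity_group": "ORG", "score": 0.995, "word": "Microsoft", "start": 19, "end": 28},
    {"entity_group": "LOC", "score": 0.62, "word": "United States", "start": 36, "end": 49},
]

for ent in filter_entities(results):
    print(f'{ent["entity_group"]}: {ent["word"]} ({ent["score"]:.3f})')
```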