Dataset: germeval_14 (GermEval 2014 NER shared task)
How to use mhemon/ner_model with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="mhemon/ner_model")
```

Or load the model and tokenizer directly:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("mhemon/ner_model")
model = AutoModelForTokenClassification.from_pretrained("mhemon/ner_model")
```

This model is a fine-tuned version of bert-base-german-cased on the germeval_14 dataset. Its results on the evaluation set are reported per epoch in the training results table below.
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
The following per-epoch results were obtained during training:
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|---|---|---|---|---|---|---|---|
| 0.1274 | 1.0 | 3000 | 0.1132 | 0.9671 | 0.8144 | 0.8031 | 0.8260 |
| 0.065 | 2.0 | 6000 | 0.1382 | 0.9690 | 0.8301 | 0.8452 | 0.8155 |
| 0.0365 | 3.0 | 9000 | 0.1446 | 0.9703 | 0.8348 | 0.8311 | 0.8386 |
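As a sanity check, the F1 column of the table above can be reproduced as the harmonic mean of the precision and recall columns. A minimal sketch, with the values copied from the table:

```python
# Verify that reported F1 = 2PR / (P + R) for each epoch,
# using the precision/recall values from the table above.
rows = [
    # (epoch, precision, recall, reported_f1)
    (1, 0.8031, 0.8260, 0.8144),
    (2, 0.8452, 0.8155, 0.8301),
    (3, 0.8311, 0.8386, 0.8348),
]

for epoch, p, r, f1_reported in rows:
    f1 = 2 * p * r / (p + r)
    assert abs(f1 - f1_reported) < 5e-4, (epoch, f1)
```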
Base model: google-bert/bert-base-german-cased