| source | url | file_type | chunk | chunk_id |
|---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/how_to.md | https://huggingface.co/docs/datasets/en/how_to/#overview | .md | The how-to guides offer a more comprehensive overview of all the tools 🤗 Datasets offers and how to use them. This will help you tackle messier real-world datasets where you may need to manipulate the dataset structure or content to get it ready for training.
The guides assume you are familiar and comfortable with t... | 0_0_0 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/how_to.md | https://huggingface.co/docs/datasets/en/how_to/#overview | .md | <Tip>
Interested in learning more? Take a look at [Chapter 5](https://huggingface.co/course/chapter5/1?fw=pt) of the Hugging Face course!
</Tip>
The guides are organized into six sections:
- <span class="underline decoration-sky-400 decoration-2 font-semibold">General usage</span>: Functions for general dataset... | 0_0_1 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/how_to.md | https://huggingface.co/docs/datasets/en/how_to/#overview | .md | - <span class="underline decoration-pink-400 decoration-2 font-semibold">Audio</span>: How to load, process, and share audio datasets.
- <span class="underline decoration-yellow-400 decoration-2 font-semibold">Vision</span>: How to load, process, and share image and video datasets.
- <span class="underline decoration-g... | 0_0_2 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/how_to.md | https://huggingface.co/docs/datasets/en/how_to/#overview | .md | - <span class="underline decoration-orange-400 decoration-2 font-semibold">Tabular</span>: How to load, process, and share tabular datasets.
- <span class="underline decoration-indigo-400 decoration-2 font-semibold">Dataset repository</span>: How to share and upload a dataset to the <a href="https://huggingface.co/data... | 0_0_3 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/tutorial.md | https://huggingface.co/docs/datasets/en/tutorial/#overview | .md | Welcome to the 🤗 Datasets tutorials! These beginner-friendly tutorials will guide you through the fundamentals of working with 🤗 Datasets. You'll load and prepare a dataset for training with your machine learning framework of choice. Along the way, you'll learn how to load different dataset configurations and splits,... | 1_0_0 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/tutorial.md | https://huggingface.co/docs/datasets/en/tutorial/#overview | .md | The tutorials assume some basic knowledge of Python and a machine learning framework like PyTorch or TensorFlow. If you're already familiar with these, feel free to check out the [quickstart](./quickstart) to see what you can do with 🤗 Datasets.
<Tip> | 1_0_1 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/tutorial.md | https://huggingface.co/docs/datasets/en/tutorial/#overview | .md | <Tip>
The tutorials only cover the basic skills you need to use 🤗 Datasets. There are many other useful functionalities and applications that aren't discussed here. If you're interested in learning more, take a look at [Chapter 5](https://huggingface.co/course/chapter5/1?fw=pt) of the Hugging Face course.
</Tip> ... | 1_0_2 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/installation.md | https://huggingface.co/docs/datasets/en/installation/#installation | .md | Before you start, you'll need to set up your environment and install the appropriate packages. 🤗 Datasets is tested on **Python 3.7+**.
<Tip>
If you want to use 🤗 Datasets with TensorFlow or PyTorch, you'll need to install them separately. Refer to the [TensorFlow installation page](https://www.tensorflow.org/inst... | 2_0_0 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/installation.md | https://huggingface.co/docs/datasets/en/installation/#installation | .md | </Tip> | 2_0_1 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/installation.md | https://huggingface.co/docs/datasets/en/installation/#virtual-environment | .md | You should install 🤗 Datasets in a [virtual environment](https://docs.python.org/3/library/venv.html) to keep things tidy and avoid dependency conflicts.
1. Create and navigate to your project directory:
```bash
mkdir ~/my-project
cd ~/my-project
```
2. Start a virtual environment inside your directory:
```bas... | 2_1_0 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/installation.md | https://huggingface.co/docs/datasets/en/installation/#virtual-environment | .md | # Deactivate the virtual environment
deactivate
```
Once you've created your virtual environment, you can install 🤗 Datasets in it. | 2_1_1 |
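Putting the steps from this section together, a minimal end-to-end sketch (assuming a Unix-like shell and Python 3 with the built-in `venv` module; `~/my-project` is just the example path used above):

```shell
# Create a project directory and an isolated virtual environment inside it
mkdir -p ~/my-project
cd ~/my-project
python3 -m venv .env

# Activate the environment, install packages into it, then leave it again.
# `deactivate` is a shell function defined by the activate script, so it
# only exists after the environment has been activated.
source .env/bin/activate
pip install datasets
deactivate
```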
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/installation.md | https://huggingface.co/docs/datasets/en/installation/#pip | .md | The most straightforward way to install 🤗 Datasets is with pip:
```bash
pip install datasets
```
Run the following command to check if 🤗 Datasets has been properly installed:
```bash
python -c "from datasets import load_dataset; print(load_dataset('squad', split='train')[0])"
```
This command downloads versio... | 2_2_0 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/installation.md | https://huggingface.co/docs/datasets/en/installation/#pip | .md | ```python | 2_2_1 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/installation.md | https://huggingface.co/docs/datasets/en/installation/#pip | .md | {'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']}, 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms uprais... | 2_2_2 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/installation.md | https://huggingface.co/docs/datasets/en/installation/#pip | .md | is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects t... | 2_2_3 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/installation.md | https://huggingface.co/docs/datasets/en/installation/#pip | .md | of Mary.', 'id': '5733be284776f41900661182', 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?', 'title': 'University_of_Notre_Dame'} | 2_2_4 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/installation.md | https://huggingface.co/docs/datasets/en/installation/#pip | .md | ``` | 2_2_5 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/installation.md | https://huggingface.co/docs/datasets/en/installation/#audio | .md | To work with audio datasets, you need to install the [`Audio`] feature as an extra dependency:
```bash
pip install datasets[audio]
```
<Tip warning={true}>
To decode mp3 files, you need to have at least version 1.1.0 of the `libsndfile` system library. Usually, it's bundled with the Python [`soundfile`](https://g... | 2_3_0 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/installation.md | https://huggingface.co/docs/datasets/en/installation/#audio | .md | For Linux, the required version of `libsndfile` is bundled with `soundfile` starting from version 0.12.0. You can run the following command to determine which version of `libsndfile` is being used by `soundfile`:
```bash
python -c "import soundfile; print(soundfile.__libsndfile_version__)"
```
</Tip> | 2_3_1 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/installation.md | https://huggingface.co/docs/datasets/en/installation/#vision | .md | To work with image datasets, you need to install the [`Image`] feature as an extra dependency:
```bash
pip install datasets[vision]
``` | 2_4_0 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/installation.md | https://huggingface.co/docs/datasets/en/installation/#source | .md | Building 🤗 Datasets from source lets you make changes to the code base. To install from source, clone the repository and install with the following commands:
```bash
git clone https://github.com/huggingface/datasets.git
cd datasets
pip install -e .
```
Again, you can check if 🤗 Datasets was properly installed... | 2_5_0 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/installation.md | https://huggingface.co/docs/datasets/en/installation/#conda | .md | 🤗 Datasets can also be installed from conda, a package management system:
```bash
conda install -c huggingface -c conda-forge datasets
``` | 2_6_0 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/about_arrow.md | https://huggingface.co/docs/datasets/en/about_arrow/#what-is-arrow | .md | [Arrow](https://arrow.apache.org/) enables large amounts of data to be processed and moved quickly. It is a specific data format that stores data in a columnar memory layout. This provides several significant advantages:
* Arrow's standard format allows [zero-copy reads](https://en.wikipedia.org/wiki/Zero-copy) which... | 3_0_0 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/about_arrow.md | https://huggingface.co/docs/datasets/en/about_arrow/#what-is-arrow | .md | * Arrow is language-agnostic so it supports different programming languages.
* Arrow is column-oriented so it is faster at querying and processing slices or columns of data.
* Arrow allows for copy-free hand-offs to standard machine learning tools such as NumPy, Pandas, PyTorch, and TensorFlow.
* Arrow supports many, p... | 3_0_1 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/about_arrow.md | https://huggingface.co/docs/datasets/en/about_arrow/#memory-mapping | .md | 🤗 Datasets uses Arrow for its local caching system. It allows datasets to be backed by an on-disk cache, which is memory-mapped for fast lookup.
This architecture allows for large datasets to be used on machines with relatively small device memory.
For example, loading the full English Wikipedia dataset only takes a... | 3_1_0 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/about_arrow.md | https://huggingface.co/docs/datasets/en/about_arrow/#memory-mapping | .md | >>> import os
>>> import psutil
>>> from datasets import load_dataset
# Process.memory_info is expressed in bytes, so convert to megabytes
>>> mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
>>> wiki = load_dataset("wikipedia", "20220301.en", split="train")
>>> mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024) | 3_1_1 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/about_arrow.md | https://huggingface.co/docs/datasets/en/about_arrow/#memory-mapping | .md | >>> print(f"RAM memory used: {(mem_after - mem_before)} MB")
RAM memory used: 50 MB
```
This is possible because the Arrow data is actually memory-mapped from disk, and not loaded in memory.
Memory-mapping allows access to data on disk, and leverages virtual memory capabilities for fast lookups. | 3_1_2 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/about_arrow.md | https://huggingface.co/docs/datasets/en/about_arrow/#performance | .md | Iterating over a memory-mapped dataset using Arrow is fast. Iterating over Wikipedia on a laptop gives you speeds of several Gbit/s:
```python
>>> s = """batch_size = 1000
... for batch in wiki.iter(batch_size):
... ...
... """ | 3_2_0 |
/Users/nielsrogge/Documents/python_projecten/datasets/docs/source/about_arrow.md | https://huggingface.co/docs/datasets/en/about_arrow/#performance | .md | >>> elapsed_time = timeit.timeit(stmt=s, number=1, globals=globals())
>>> print(f"Time to iterate over the {wiki.dataset_size >> 30} GB dataset: {elapsed_time:.1f} sec, "
... f"ie. {float(wiki.dataset_size >> 27)/elapsed_time:.1f} Gb/s")
Time to iterate over the 18 GB dataset: 31.8 sec, ie. 4.8 Gb/s
``` | 3_2_1 |
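The bit-shift arithmetic in the snippet above can be spelled out: shifting right by 30 divides bytes by 2³⁰ to get GB, while bytes × 8 / 2³⁰ (bytes to bits, then to giga) equals a right shift by 27, giving Gbit/s. A toy check, where the 18 GB size and 31.8 s time are stand-ins for a real measurement:

```python
dataset_size = 18 * 2**30  # pretend the dataset is exactly 18 GB, in bytes
elapsed_time = 31.8        # pretend one pass over it took 31.8 seconds

gigabytes = dataset_size >> 30                                # bytes / 2**30
gigabits_per_sec = float(dataset_size >> 27) / elapsed_time   # bytes*8/2**30 per sec
print(gigabytes)                   # 18
print(round(gigabits_per_sec, 1))  # 4.5
```

With these round numbers the result differs slightly from the 4.8 Gb/s printed above, since the real dataset is not exactly 18 GB (the `>> 30` display truncates).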