AI & ML interests

OpenFree_AI

telcom posted an update 3 days ago
If you are interested in HUB (https://saemi410.github.io/HUB/) and want to run UCE (https://unified.baulab.info), please check the resources below. I recommend my fork of HUB ([email protected]:javadtaghia/HUB.git), which includes a few updates that make running a smoke test smoother.
- Model weights for UCE: telcom/uce_NSFW
- Model weights for ESD: telcom/esd_NSFW
- Datasets and additional download materials: telcom/HUB_reference_dataset

Please read the notes in the model card.
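
If you want to grab everything programmatically, here is a minimal sketch using huggingface_hub. It assumes the library is installed and that telcom/HUB_reference_dataset is hosted as a dataset repo; adjust repo_type if it is not.

```python
# Minimal sketch: clone the HUB fork and fetch the weights/data from the Hub.
# Assumptions: huggingface_hub is installed (`pip install huggingface_hub`)
# and telcom/HUB_reference_dataset is a dataset repo (adjust repo_type if not).
import subprocess
from huggingface_hub import snapshot_download

# Fork of HUB with the smoke-test updates mentioned above.
subprocess.run(["git", "clone", "[email protected]:javadtaghia/HUB.git"], check=True)

uce_dir = snapshot_download(repo_id="telcom/uce_NSFW")
esd_dir = snapshot_download(repo_id="telcom/esd_NSFW")
data_dir = snapshot_download(repo_id="telcom/HUB_reference_dataset", repo_type="dataset")

print(uce_dir, esd_dir, data_dir)
```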
telcom posted an update 9 days ago
NVIDIA’s Groq deal ... I think inference efficiency is becoming the main driver of profitability, and NVIDIA’s Groq deal is evidence the market is moving from “who can train the biggest model” to “who can serve it cheapest and fastest at scale.” That points to a maturing phase of AI: not necessarily the end of a bubble, but definitely a correction in what “wins” long-term.
What do you think?
telcom posted an update 11 days ago
CIFAR-10: your handy image dataset ...
CIFAR-10 is a small, standard computer-vision dataset used to quickly test and compare ideas.

- 60,000 color images, each 32×32 pixels, labeled into 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
- Label mapping (important):

- 0 airplane
- 1 automobile
- 2 bird
- 3 cat
- 4 deer
- 5 dog
- 6 frog
- 7 horse
- 8 ship
- 9 truck
- Split: 50,000 train and 10,000 test.
- Why people use it: fast benchmarking for image classifiers (small CNNs, ResNet, ViT), and quick experiments for training pipelines, augmentation, regularization, pruning, distillation, and demos.
- Sizes (downloads): Python version about 163 MB, binary about 162 MB. Hugging Face shows about 144 MB for the dataset files.
- Where to get it: the official CIFAR page (University of Toronto) and the Hugging Face CIFAR-10 dataset page.
uoft-cs/cifar10
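If you just want to poke at it, here is a minimal sketch using the `datasets` library. It assumes `datasets` is installed and that the column names ("img", "label") follow the uoft-cs/cifar10 dataset card.

```python
# Minimal sketch: load CIFAR-10 from the Hub and sanity-check the facts above.
# Assumptions: `pip install datasets`, and the "img"/"label" column names
# used by the uoft-cs/cifar10 repo.
from datasets import load_dataset

ds = load_dataset("uoft-cs/cifar10")

print(ds["train"].num_rows, ds["test"].num_rows)   # 50000 10000
print(ds["train"].features["label"].names)         # airplane ... truck (ids 0-9)

sample = ds["train"][0]
print(sample["img"].size, sample["label"])         # (32, 32) and an integer label
```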
If you want something more, check the table below:
| Dataset | Resolution | Classes | Best For |
| --- | --- | --- | --- |
| ImageNet-1K | 224–256×256 | 1000 | Real-world large-scale classification |
| ImageNet-256 | 256×256 | 1000 | Direct high-res training |
| TinyImageNet | 64×64 | 200 | Mid-range benchmark |
| UC Merced Land Use | 256×256 | 21 | Higher-resolution small-scale classification |
| MS COCO | >256×256 | ~80 object categories | Detection / segmentation |
telcom posted an update 13 days ago
arXiv CS endorsement

It's Javad; here is my Google Scholar profile:
https://scholar.google.com/citations?user=bja6GwoAAAAJ&hl=en
I would like to share my articles with you on Hugging Face, so I am asking for an endorsement* in Computer Science on arxiv.org.

If you would like to endorse me, please visit the following URL:
https://arxiv.org/auth/endorse?x=NVUAPL
If that URL does not work for you, please visit
http://arxiv.org/auth/endorse.php
and enter the following six-digit alphanumeric string:
Endorsement Code: NVUAPL

Thank you in advance.
Javad Taghia

* Who is qualified to endorse?

To endorse another user to submit to the cs.AI (Artificial Intelligence) subject class, an arXiv submitter must have submitted 3 papers to any of cs.AI, cs.AR, cs.CC, cs.CE, cs.CG, cs.CL, cs.CR, cs.CV, cs.CY, cs.DB, cs.DC, cs.DL, cs.DM, cs.DS, cs.ET, cs.FL, cs.GL, cs.GR, cs.GT, cs.HC, cs.IR, cs.IT, cs.LG, cs.LO, cs.MA, cs.MM, cs.MS, cs.NA, cs.NE, cs.NI, cs.OH, cs.OS, cs.PF, cs.PL, cs.RO, cs.SC, cs.SD, cs.SE, cs.SI or cs.SY earlier than three months ago and less than five years ago.

telcom posted an update 25 days ago
Recently I was playing with my model. What are your thoughts on "unlearning"? I need it 😀
telcom/deewaiREALCN: the original model is on the main branch, and the trained versions "cp550" and "n_680" are on another branch.
Both were trained on telcom/deewaiREALCN-training.
I got three results for the prompt:
"Athlete portrait, 26-year-old woman, post-training sweat, gym ambient light, chalk dust particles, intense gaze, crisp detail."
Apparently, the model is sensitive to the word "old".
You can see that training on more faces improved over main; however, it is still not ideal...
I am now working on unlearning and would like to hear your opinion.
#unlearning
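
For anyone who wants to reproduce the comparison, a minimal sketch is below. It assumes the repo is a diffusers-compatible text-to-image pipeline and that each trained version can be loaded by revision; treat the revision names as placeholders for the actual branch/checkpoint layout.

```python
# Minimal sketch: generate the same prompt from the main branch and from the
# trained versions. Assumptions: diffusers-compatible repo, and that "cp550"
# and "n_680" resolve to loadable Hub revisions (placeholders; adjust to the
# actual branch/checkpoint layout).
import torch
from diffusers import DiffusionPipeline

prompt = ("Athlete portrait, 26-year-old woman, post-training sweat, "
          "gym ambient light, chalk dust particles, intense gaze, crisp detail.")

for revision in ["main", "cp550", "n_680"]:
    pipe = DiffusionPipeline.from_pretrained(
        "telcom/deewaiREALCN",
        revision=revision,
        torch_dtype=torch.float16,
    ).to("cuda")
    image = pipe(prompt).images[0]
    image.save(f"deewai_{revision}.png")
```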


openfree updated a Space 6 months ago
openfree published a Space 6 months ago