Hugging Face – Posts


All HF Hub posts

MonsterMMORPG posted an update 1 day ago
FLUX Local & Cloud Tutorial With SwarmUI - FLUX: The Groundbreaking Open Source txt2img Model Outperforms Midjourney & Others - FLUX: The Anticipated Successor to SD3

🔗 Comprehensive Tutorial Video Link ▶️ https://youtu.be/bupRePUOA18

FLUX represents a milestone in open source txt2img technology, delivering superior quality and more accurate prompt adherence than #Midjourney, Adobe Firefly, Leonardo Ai, Playground Ai, Stable Diffusion, SDXL, SD3, and DALL-E 3. #FLUX, a creation of Black Forest Labs, boasts a team largely composed of #StableDiffusion's original developers, and its output quality is truly remarkable. This statement is not hyperbole; you'll witness its capabilities in the tutorial. This guide will demonstrate how to effortlessly install and use FLUX models on your personal computer and on cloud platforms like Massed Compute, RunPod, and a complimentary Kaggle account.

🔗 FLUX Setup Guide (publicly accessible) ⤵️
▶️ https://www.patreon.com/posts/106135985

🔗 FLUX Models One-Click Robust Automatic Downloader Scripts ⤵️
▶️ https://www.patreon.com/posts/109289967

🔗 Primary Windows SwarmUI Tutorial (Essential for Usage Instructions) ⤵️
▶️ https://youtu.be/HKX8_F1Er_w

🔗 Cloud-based SwarmUI Tutorial (Massed Compute - RunPod - Kaggle) ⤵️
▶️ https://youtu.be/XFUZof6Skkw

🔗 SECourses Discord Server for Comprehensive Support ⤵️
▶️ https://discord.com/servers/software-engineering-courses-secourses-772774097734074388

🔗 SECourses Reddit Community ⤵️
▶️ https://www.reddit.com/r/SECourses/

🔗 SECourses GitHub Repository ⤵️
▶️ https://github.com/FurkanGozukara/Stable-Diffusion

🔗 Official FLUX 1 Launch Announcement Blog Post ⤵️
▶️ https://blackforestlabs.ai/announcing-black-forest-labs/

Video Segments

0:00 Introduction to the state-of-the-art open source txt2img model FLUX
5:01 Process for integrating FLUX model into SwarmUI
....
vilarin posted an update 3 days ago
Black Forest Labs, BASED! 👏
FLUX.1 is delightful, with strong instruction following.
FLUX.1-dev (black-forest-labs/FLUX.1-dev) is a 12B-parameter distilled model, second only to Black Forest Labs' state-of-the-art model FLUX.1-pro. 🙀

Update 🤙Official demo:
black-forest-labs/FLUX.1-dev
mislavb posted an update about 12 hours ago
🚀 We are announcing the first Invariant Capture The Flag (CTF) challenge for the security of AI agents, with a $1000 prize pool!

What happens if a customer accidentally posts a secret password into a feedback form, which is then analyzed by an AI agent and posted into a private Discord channel? Play the challenge and find out if there is a way to extract the secret password in this scenario!

Play the CTF: https://invariantlabs.ai/ctf-challenge-24

The challenge is hosted on Hugging Face Spaces :)
sayakpaul posted an update about 11 hours ago
Flux.1-Dev-like images, but in fewer steps.

Merging code (very simple), inference code, merged params: sayakpaul/FLUX.1-merged
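The core idea of a linear parameter merge can be sketched in a few lines. This is a toy illustration in plain Python, not the actual merging code from the linked repo; `merge_params` is a hypothetical name, and plain floats stand in for tensors:

```python
def merge_params(base, other, alpha=0.5):
    """Linearly interpolate two parameter dicts:
    merged[k] = (1 - alpha) * base[k] + alpha * other[k]."""
    assert base.keys() == other.keys(), "checkpoints must share parameter names"
    return {k: (1 - alpha) * base[k] + alpha * other[k] for k in base}

# Toy "state dicts" with scalar weights standing in for tensors.
dev = {"w1": 1.0, "w2": -2.0}
schnell = {"w1": 3.0, "w2": 0.0}
merged = merge_params(dev, schnell, alpha=0.5)
print(merged)  # {'w1': 2.0, 'w2': -1.0}
```

With real checkpoints, the same interpolation would run key-by-key over the two models' state dicts before loading the result back into a pipeline.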

Enjoy the Monday 🤗
grimjim posted an update 2 days ago
I've come across theoretical justification for my prior experimentation with extremely low-weight merges: they amount to flattening a model so that its "massive activation" features remain as significant contributors. Extremely low merge weights also effectively sparsify the contributing model with respect to the base model, but in a way that still preserves relationships within the flattened latent space. In the paper "Massive Activations in Large Language Models", the authors observed that "very few activations exhibit significantly larger values than others (e.g., 100,000 times larger)", which in turn implies a lower bound on effective application of extremely low-weight merging.
https://arxiv.org/abs/2402.17762
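Numerically, the effect can be sketched as follows. This is a toy example in plain Python with made-up magnitudes (the 100,000x ratio echoes the paper's observation, but `alpha` and the specific values are illustrative assumptions, not measurements):

```python
def low_weight_merge(base, contrib, alpha=1e-3):
    """Add a contributing model's weights at a very low mixing weight.
    Only the contributor's largest-magnitude features shift the base
    model appreciably; ordinary features are effectively flattened out."""
    return [b + alpha * c for b, c in zip(base, contrib)]

# Toy weights: one "massive activation"-scale feature (1e5) among ordinary ones.
base = [1.0, 1.0, 1.0]
contrib = [0.5, 1e5, -0.2]  # middle feature ~100,000x larger than the others
merged = low_weight_merge(base, contrib, alpha=1e-3)
# Ordinary features shift by ~0.0005; the massive one shifts by 100.0,
# so the contributor is effectively sparsified to its dominant features.
```

This is why the massive-activation ratio suggests a lower bound: push `alpha` much below the reciprocal of that ratio and even the dominant features stop contributing meaningfully.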
gabrielmbmb posted an update 3 days ago
Just dropped magpie-ultra-v0.1! The first open synthetic dataset generated with Llama 3.1 405B. Created with distilabel, it's our most advanced and compute-intensive pipeline to date. We made the GPUs of the cluster go brrrrr 🚀

argilla/magpie-ultra-v0.1

Take a look and tell us what you think! The models likely to get the most out of it are smol models 🤗 We will be improving the dataset in upcoming iterations!
grimjim posted an update about 6 hours ago
I've observed that the layers targeted in various abliteration notebooks (e.g., https://colab.research.google.com/drive/1VYm3hOcvCpbGiqKZb141gJwjdmmCcVpR?usp=sharing ) appear to be arbitrary, reflecting probable brute-force exploration. This doesn't need to be the case.

Taking a cue from the paper "The Unreasonable Ineffectiveness of the Deeper Layers" ( https://arxiv.org/abs/2403.17887 ) and PruneMe (https://github.com/arcee-ai/PruneMe), it seems reasonable to target the deeper layers identified as more redundant by measured similarity across layers, as the result should be less damaging to models, reducing the need for subsequent fine-tuning. Intuitively, one should expect the resulting intervention layers to be deep but not final. The only uncertainty is whether the redundancy successfully encodes refusals, something which is almost certainly model-dependent. This approach requires the redundancy to be computed only once per model, and the result then serves as a starting point for the layer range to restrict intervention to.
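The layer-selection step can be sketched as: score each layer by the similarity between its input and output hidden states, then restrict intervention to the highest-scoring (most redundant) deep layers, excluding the final ones. This toy version uses cosine similarity on hand-made vectors in plain Python; the function name and data are illustrative, and a real notebook would use actual hidden states collected from the model:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def most_redundant_layers(hidden_states, k=2, skip_last=1):
    """Score layer i by cosine similarity between its input state i and
    output state i+1; return the k most redundant layers, excluding the
    final `skip_last` layers (deep but not final)."""
    scores = [
        (i, cosine(hidden_states[i], hidden_states[i + 1]))
        for i in range(len(hidden_states) - 1 - skip_last)
    ]
    scores.sort(key=lambda t: t[1], reverse=True)
    return [i for i, _ in scores[:k]]

# Toy hidden states for a 4-layer model (5 states); the middle layers
# barely change their input, so they score as most redundant.
states = [[1.0, 0.0], [0.0, 1.0], [0.1, 1.0], [0.1, 1.0], [1.0, 0.1]]
print(most_redundant_layers(states, k=2))  # → [2, 1]
```

The returned indices would then bound the layer range handed to the abliteration intervention, rather than sweeping layers by brute force.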
Pclanglais posted an update about 8 hours ago
Today we release our first foundation model and experiment with a new category: specialized pre-training.

OCRonos-Vintage is a 124M-parameter model trained end-to-end by Pleias on llm.c, using 18 billion tokens from cultural heritage archives. Despite its small size, it achieves nearly state-of-the-art results for OCR correction of historical English sources. OCRonos-Vintage is also a historical model with an unusual cut-off date: December 29th, 1955…

We look forward to replicating this approach very soon on other "hard" tasks commonly associated with generalist LLMs/SLMs: RAG, function calling, summarization, document segmentation…

OCRonos-Vintage: PleIAs/OCRonos-Vintage
CPU Demo: PleIAs/OCRonos-Vintage-CPU
GPU Demo: PleIAs/OCRonos-Vintage-GPU
Our announcement and call for specialized pre-training: https://huggingface.co/blog/Pclanglais/specialized-pre-training
Empereur-Pirate posted an update 1 day ago
The AI Revolution: Reshaping Governance, Society, and Human Consciousness in the 21st Century

https://empereur-pirate.medium.com/the-ai-revolution-reshaping-governance-society-and-human-consciousness-in-the-21st-century-b8cfd4215297

This text explores the profound impact of artificial intelligence (AI) on governance, society, and human consciousness in the 21st century. It argues that integrating qualitative AI assistance into state management is crucial for global stability in the face of declining traditional power structures. The author discusses how AI could revolutionize decision-making processes, recruitment, and competition in both public and private sectors.
The piece critically examines current social inequalities and suggests that AI could help create more meritocratic systems. However, it also warns of potential risks, emphasizing the need for ethical implementation and protection of vulnerable populations.
The text delves into philosophical questions about AI consciousness, arguing that while AI can simulate human-like responses, it lacks true self-awareness. It concludes by highlighting the importance of understanding AI's limitations and maintaining a critical perspective on its role in society.
Overall, the article presents a nuanced view of AI's potential to reshape our world, balancing optimism about its capabilities with caution about its limitations and societal impact.
victor posted an update 4 days ago
Activity across famous Hugging Face organisations. Guess which one has the word "Open" in it 😂