Uncensored Multimodal Power: Qwen3.5-35B-A3B Aggressive Variant
HauhauCS/Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive
The **Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive** model is an uncensored, aggressive fork of the original Qwen3.5‑35B‑A3B, released by community contributor HauhauCS. With over 210K downloads and a trending score of 548, it has quickly become a focal point for users seeking a fully unlocked large‑scale multimodal LLM. The model is marketed as having "0/465 refusals," meaning it blocks no user prompts and only appends brief, non‑refusal disclaimer text when required.
Technically, the model retains the 35 billion‑parameter architecture of its base, employing a Mixture‑of‑Experts (MoE) design with 256 experts (8 routed + 1 shared per token) and a hybrid attention scheme (Gated DeltaNet linear attention plus full softmax attention). It supports a massive 262K native context (extendable to 1M with YaRN) and is natively multimodal, handling text, images, and video. The model is multilingual, covering 201 languages including English and Chinese, and is packaged in GGUF format with a range of quantizations from BF16 down to IQ2_M for diverse hardware needs. Vision capabilities are enabled via a companion `mmproj` file.
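To give a sense of what the quantization range means in practice, here is a minimal sketch estimating memory footprints for a 35B‑parameter model at a few quantization levels. The bits‑per‑weight figures are rough, illustrative assumptions (and the intermediate `Q8_0`/`Q4_K_M` levels are hypothetical examples), not measurements of the actual files in this repository.

```python
# Rough memory-footprint estimates for a 35B-parameter model.
# Bits-per-weight values below are approximate, illustrative assumptions,
# not measurements of the repository's actual GGUF files.
PARAMS = 35e9

BITS_PER_WEIGHT = {
    "BF16": 16.0,     # full bfloat16 precision
    "Q8_0": 8.5,      # hypothetical 8-bit level (includes block overhead)
    "Q4_K_M": 4.8,    # hypothetical 4-bit level
    "IQ2_M": 2.7,     # the smallest quantization mentioned in the repo
}

def approx_size_gb(bits: float, params: float = PARAMS) -> float:
    """Approximate model size in gigabytes for a given bits-per-weight."""
    return params * bits / 8 / 1e9

for name, bits in BITS_PER_WEIGHT.items():
    print(f"{name:>7}: ~{approx_size_gb(bits):.0f} GB")
```

Even at the most aggressive quantization, a 35B model remains a double‑digit‑gigabyte download, which is why the repository ships so many intermediate levels.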
The repository provides detailed usage instructions for GGUF‑compatible runtimes such as llama.cpp, LM Studio, Jan, and koboldcpp. Recommended generation settings differentiate between "thinking" and "non‑thinking" modes, with temperature, top‑p, and presence‑penalty tweaks for general, coding, or reasoning tasks. Users are advised to keep a minimum context of 128K tokens to preserve the model's reasoning abilities and to use the `--jinja` flag for proper chat template handling. The aggressive uncensoring makes the model suitable for applications where content moderation is intentionally omitted; a more conservative "Balanced" variant is also mentioned in the repository.
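One way to organize the mode‑dependent generation settings is as named presets with per‑task overrides. The numeric values below are placeholders in the spirit of the repo's guidance, not its exact recommended figures, and the `sampler_settings` helper is a hypothetical convenience, not part of any runtime's API.

```python
# Illustrative sampler presets for "thinking" vs "non-thinking" modes.
# Values are placeholders showing how such presets might be organized;
# consult the repository for the actual recommended numbers.
PRESETS = {
    "thinking": {"temperature": 0.6, "top_p": 0.95, "presence_penalty": 1.0},
    "non_thinking": {"temperature": 0.7, "top_p": 0.8, "presence_penalty": 1.5},
}

def sampler_settings(mode: str, **overrides) -> dict:
    """Return a copy of the preset for `mode`, with per-task overrides applied."""
    if mode not in PRESETS:
        raise ValueError(f"unknown mode: {mode!r}")
    return {**PRESETS[mode], **overrides}

# e.g. a coding task in non-thinking mode with a lower temperature:
print(sampler_settings("non_thinking", temperature=0.3))
```

Keeping the presets in one place makes it easy to tweak a single parameter for coding or reasoning tasks without losing the mode's baseline.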
Its popularity stems from the combination of high capacity, extensive multimodal support, and the removal of safety filters, offering developers a powerful sandbox for experimentation in image‑text generation, multilingual visual QA, and long‑context reasoning without the constraints typical of commercial LLMs.
Project Ideas
- Create an uncensored multimodal chatbot that answers questions based on both text and uploaded images.
- Build an image‑captioning service that generates detailed, multilingual descriptions without content restrictions.
- Develop a visual question‑answering tool for educational content, leveraging the model's 262K context to handle long passages and diagrams.
- Implement a story‑generation assistant that takes a series of images as prompts and writes continuous narratives in any of the supported 201 languages.
- Design a research sandbox for probing long‑context reasoning by feeding the model up to 1 M tokens of mixed text and video transcripts.
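For the long‑context experiments above, inputs still need to be budgeted against the context window. Here is a minimal sketch that greedily packs documents into batches fitting the native 262K window (or the 1M YaRN‑extended one). The 4‑characters‑per‑token estimate is a crude assumption; a real pipeline would use the model's actual tokenizer.

```python
# Minimal sketch of budgeting long transcripts against the context window.
# Token counts are approximated at ~4 characters per token (an assumption);
# use the model's real tokenizer for accurate counts.
NATIVE_CTX = 262_144      # 262K native context
YARN_CTX = 1_048_576      # ~1M tokens with YaRN extension

def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def chunk_by_budget(docs: list[str], budget: int) -> list[list[str]]:
    """Greedily pack documents into batches that fit within `budget` tokens."""
    batches, current, used = [], [], 0
    for doc in docs:
        n = approx_tokens(doc)
        if current and used + n > budget:
            batches.append(current)
            current, used = [], 0
        current.append(doc)
        used += n
    if current:
        batches.append(current)
    return batches
```

A greedy packer like this keeps each request under the window while minimizing the number of calls, which matters when every batch triggers a full long‑context forward pass.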