Dify: Finally Built That AI Chatbot Without Writing Code

Tried LangChain? Too complicated. Flowise? Limited features. Dify actually works. Here's how I built a customer support bot in a day.

How I ended up here

Needed a customer support bot for our SaaS. Users kept asking the same stuff - "how do I reset my API key?", "where's the documentation for X?", you know the drill. Support team was drowning in repetitive questions.

Looked at options. Could build it with LangChain - but I'm not really a Python person and the learning curve seemed steep. Checked out Flowise - liked the visual approach but felt limited for what we needed.

Then someone mentioned Dify. Open source, visual builder, has knowledge base built in, you can self-host it. Sounded worth a shot.

What I actually built

Upload PDFs → Auto-chunk → Vector embed → Chatbot interface → Done. Whole thing took like 6 hours from zero to deployed.

Now handles ~70% of support tickets automatically. Team can focus on actual problems instead of answering the same stuff repeatedly.

So what is Dify anyway

It's an open-source platform for building LLM apps. Basically a visual workflow builder with all the AI stuff already wired up - knowledge base, vector database, model integrations, API layer.

The main pieces:

  • Visual workflow: Drag and drop nodes instead of writing Python
  • Model management: Connect GPT-4, Claude, Llama - all in one place
  • Knowledge base: Upload docs, it handles chunking and embeddings
  • RAG out of the box: Retrieval-augmented generation, no manual setup
  • API generation: Your workflow becomes a REST endpoint automatically
  • Self-hosted: Data stays on your servers

Nice thing is it doesn't lock you in. Everything's open source. If you outgrow it, you can export and build custom stuff with the components.
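
To give you an idea of the API layer: every deployed app gets its own REST endpoint. Here's a rough Python sketch of assembling a call to Dify's chat-messages endpoint - field names follow Dify's published chat API, but the base URL and key here are placeholders, so check your app's own API page for the exact values:

```python
import json

def build_chat_request(query: str, user_id: str, api_key: str,
                       base_url: str = "https://api.dify.ai/v1"):
    """Assemble the HTTP pieces for Dify's chat-messages endpoint."""
    return {
        "url": f"{base_url}/chat-messages",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "inputs": {},                 # workflow variables, if any
            "query": query,               # the user's question
            "response_mode": "blocking",  # or "streaming" for SSE
            "user": user_id,              # stable ID per end user
        }),
    }
```

Fire that with any HTTP client and the bot answers with your knowledge base behind it - no backend code on your side.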

Getting it running

Easiest: Docker Compose

This is what I used. One command and everything spins up:

git clone https://github.com/langgenius/dify.git
cd dify/docker
cp .env.example .env
docker compose up -d

Then go to http://localhost

The .env file has sensible defaults. I only changed the passwords:

# Most of this worked out of the box
POSTGRES_PASSWORD=change-this-to-something-secure
REDIS_PASSWORD=and-this-too

# Everything else I left as-is

Or just use the cloud

If you don't care about self-hosting, cloud.dify.ai works fine. Free tier's pretty generous - enough to test it out.

I started with cloud to test it out, then switched to self-hosted once I knew it worked for us.

Building the support bot

Here's exactly what I did to get our support bot working.

Step 1: Upload the docs

Went to "Knowledge" → "Create Knowledge Base" → named it "Product Docs". Then uploaded all our PDF documentation - user guides, API docs, troubleshooting guides. About 50 files total.

Dify handled the rest - chunking, embeddings, vector storage. Took maybe 10 minutes.

Step 2: Create the chatbot

Created a new app, chose "Chatbot" type. Connected GPT-4 (could've used Claude, but we already had OpenAI set up). Selected the knowledge base I just created.

For the system prompt, I wrote something simple:

You're a helpful support assistant for [product].
Answer questions using the provided context from our documentation.
If you don't know the answer based on the context, say so - don't make things up.
Keep responses concise but friendly.

Step 3: Tweak the settings

Took some testing to get right:

// What ended up working well
Temperature: 0.3 (lower = more focused)
Max Tokens: 1500
Top K: 4 (retrieve 4 relevant chunks)
Score Threshold: 0.5 (filter weak matches)

// These settings gave good answers without being too slow
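
If you're wondering what Top K and Score Threshold actually do: conceptually, retrieval drops weak matches first, then keeps the best few. A rough Python sketch (my approximation, not Dify's actual code):

```python
def select_chunks(scored_chunks, top_k=4, score_threshold=0.5):
    """Mimic the retrieval settings: drop chunks below the similarity
    threshold, then keep only the top_k best matches."""
    strong = [c for c in scored_chunks if c["score"] >= score_threshold]
    strong.sort(key=lambda c: c["score"], reverse=True)
    return strong[:top_k]
```

Fewer, stronger chunks means less junk in the prompt - which is most of why answers got more focused.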

Step 4: Test and iterate

Used the built-in preview. Asked real questions users actually ask. Some answers were off - adjusted the chunk size from 500 to 1000 characters, overlap from 50 to 200. That helped a lot.
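
Why the overlap change helped: with overlapping chunks, a sentence cut at a chunk boundary still shows up whole in the next chunk. A minimal sketch of the idea (not Dify's implementation):

```python
def chunk_text(text, size=1000, overlap=200):
    """Split text into fixed-size chunks whose edges overlap, so content
    cut at a boundary still appears intact in the neighboring chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]
```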

Also refined the system prompt a few times based on edge cases.

Step 5: Deploy

Clicked deploy, got an embed widget. Pasted it into our help center. That was it.

Whole thing maybe 6 hours spread over two days. Most of that was testing and tweaking, not actual setup.

Other stuff I built

After the support bot worked well, tried some other things:

Email triage

We get tons of emails. Built a workflow that:

Email comes in
    ↓
Classify with GPT-4 (urgent/sales/support/spam)
    ↓
Route based on classification:
- Urgent → Slack alert
- Sales → Add to CRM
- Support → Create ticket
- Spam → Delete

Saved our support lead maybe 2 hours per day. She was manually sorting through everything before.
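
The routing logic is dead simple. Here's a hypothetical sketch of that step - in the real workflow the classification is a Dify LLM node, so classify_email below is just a keyword stand-in to make the routing runnable:

```python
# Where each category ends up (mirrors our workflow branches).
ROUTES = {
    "urgent": "slack_alert",
    "sales": "crm",
    "support": "ticket",
    "spam": "trash",
}

def classify_email(subject: str) -> str:
    """Stand-in for the GPT-4 classification node (crude keywords)."""
    s = subject.lower()
    if "outage" in s or "down" in s:
        return "urgent"
    if "pricing" in s or "demo" in s:
        return "sales"
    if "unsubscribe" in s or "winner" in s:
        return "spam"
    return "support"

def route_email(subject: str) -> str:
    return ROUTES[classify_email(subject)]
```

Swapping the keyword heuristic for an LLM node is the whole trick - the routing around it stays this boring.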

Internal knowledge search

Uploaded all our internal docs - policies, procedures, meeting notes. Now anyone can just ask "what's our refund policy?" or "how did we solve that issue last month?" and actually get answers.

Huge for onboarding new people. They don't have to bug everyone constantly.

Document summarizer

Sometimes we get long PDFs from partners or vendors. Built a quick workflow that summarizes them. Not perfect but gives the gist in 30 seconds instead of reading 20 pages.

Running it in production

Scaling

Started with one Docker instance. Eventually added more replicas and a proper database once we had more users:

# Added to docker-compose.yml
services:
  api:
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: '2'
          memory: 4G

  worker:
    deploy:
      replicas: 2

Monitoring

Dify has built-in analytics. I can see token usage, costs, response times. Set up alerts when something's slow or failing.

Nice to know exactly what the AI is costing us per month.

Backups

Simple cron job to backup the database:

# Daily backup
docker exec dify-db pg_dump -U postgres dify > backup_$(date +%Y%m%d).sql

Stuff that went wrong

Retrieval was bad at first

Answers were irrelevant or missing the point.

# Fixed by adjusting chunking
Chunk size: 500 → 1000
Overlap: 50 → 200
# Also switched from OpenAI embeddings to HuggingFace
# Better for our technical content

Rate limits

OpenAI cut us off a few times during busy periods.

# Fixed by
1. Adding request queuing in Dify
2. Caching common questions
3. Using GPT-3.5 for simple queries
4. Upgrading our OpenAI plan limits
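
The caching piece is worth showing because it's so cheap to do. A tiny in-memory sketch (hypothetical - in production you'd want Redis with a TTL so answers expire when docs change):

```python
import hashlib

class AnswerCache:
    """Tiny in-memory cache for repeated questions, keyed on the
    normalized query text so trivial whitespace/case changes still hit."""

    def __init__(self):
        self._store = {}
        self.hits = 0

    def _key(self, query: str) -> str:
        normalized = " ".join(query.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def get(self, query):
        answer = self._store.get(self._key(query))
        if answer is not None:
            self.hits += 1
        return answer

    def put(self, query, answer):
        self._store[self._key(query)] = answer
```

Our top 20 questions cover a huge share of traffic, so even a dumb cache like this knocks out a lot of API calls.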

Slow responses

Some queries took 10+ seconds. Users hated that.

# Fixed by
1. Reducing retrieved chunks (k=8 → k=4)
2. Using a faster model for the retrieval step
3. Only using GPT-4 for final answer
# Got it down to ~3 seconds
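
The "only GPT-4 when it's worth it" part boils down to a routing heuristic. This sketch is hypothetical - the thresholds are made up for illustration, not the ones we run:

```python
def pick_model(query: str, retrieved_chunks: int) -> str:
    """Route short, single-question queries with little context to the
    cheap model; long or multi-part queries get GPT-4."""
    if len(query) < 120 and query.count("?") <= 1 and retrieved_chunks <= 2:
        return "gpt-3.5-turbo"
    return "gpt-4"
```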

Memory leak

Container memory kept growing. Had to restart weekly.

# Fixed by
1. Setting up auto-restart policy
2. Updating Dify to latest version
3. Adding memory limits in docker-compose
# Seems stable now
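
For reference, the restart policy and memory cap together look roughly like this in Compose (a sketch - service names may differ in your docker-compose.yml):

```yaml
# docker-compose.yml - stability fixes for the leaky container
services:
  api:
    restart: unless-stopped   # auto-restart if the process dies
    deploy:
      resources:
        limits:
          memory: 4G          # hard cap so a leak can't eat the host
```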

Dify vs the others

|                   | Dify            | Flowise     | LangChain      |
| ----------------- | --------------- | ----------- | -------------- |
| Setup time        | Hours           | Hours       | Weeks          |
| Knowledge base UI | Built-in        | Manual      | Manual         |
| API layer         | Auto-generated  | Basic       | Build yourself |
| Monitoring        | Built-in        | Limited     | Add yourself   |
| Coding required   | No              | No          | Yes            |
| Self-hosted       | Yes             | Yes         | Yes            |
| Best for          | Production apps | Prototyping | Custom stuff   |

Dify hits a sweet spot. More complete than Flowise, way easier than raw LangChain. Good for actually shipping something.

Would I recommend it?

Yeah, definitely. Our support bot went from idea to production in about a day of hands-on work. It's not perfect - sometimes it gives wrong answers, sometimes it hallucinates a bit. But for 70% of queries, it's good enough.

The visual builder is nice but what I really appreciate is the complete package. Knowledge base, workflows, APIs, monitoring - it's all there. Didn't have to stitch together a bunch of different tools.

Self-hosting was important for us - we deal with sensitive customer data and didn't want that going through third-party services. Dify made that easy.

We've since built 4 more tools on it: internal search, email triage, document summarizer, and a code assistant for our devs. Each one faster than the last.

If you need to ship AI features quickly and don't want to deal with LangChain complexity, give Dify a shot. The cloud version is free to try - takes like 5 minutes to see if it works for you.

Links: dify.ai | Docs: docs.dify.ai | GitHub: github.com/langgenius/dify