AI is no longer just a backend feature or a buzzy headline. It's starting to shape how teams actually work together: how they communicate, how they make decisions, and how they ship. But at Creed, we're not treating it like magic. We're treating it like any other tool: with curiosity, clarity, and care.
What’s surprised me most isn’t that AI can write a paragraph or suggest a line of code. It’s that AI is quietly becoming a collaboration layer for teams to compress knowledge, reduce friction, and stay aligned when the pace gets chaotic. We’re seeing that shift show up in our client work and inside our own delivery teams, and it’s changing the “shape” of a project in practical, measurable ways.
This post is for leaders and practitioners who want new collaboration techniques, not hype. It shares patterns we’re seeing in the wild across Slack and Teams, Jira and Confluence, knowledge bases, and engineering workflows, along with a few “steal-this” ways our development, design, and project management teams at Creed are using AI to collaborate better.
When clients tell us “we want to use AI,” what they often mean is: we want work to move without breaking people.
In practice, the best collaboration wins fall into four buckets:
That last one is where AI starts to feel like a teammate, not a tool. It’s also where you need the most intentional design and governance.
Most organizations don’t want another dashboard. They want AI to show up where work already happens: messaging, project tracking, documentation, and dev environments.
The most common collaboration pain we hear is some version of:
“I know we talked about this… but I can’t find it.”
That’s why Slack/Teams is becoming the natural home for AI copilots. One of the example patterns we’ve been building and testing is a Slack AI Assistant that pulls information across knowledge tools (like Notion or Confluence) and pushes summaries back into Slack, so the team gets answers without going on a scavenger hunt.
Practical ways we see teams using this pattern:
If this sounds familiar, there’s a simple way to try this with your own team.
Copy and paste this prompt:
“Summarize this thread for someone who missed it. Include: (1) decisions made, (2) open questions, (3) action items with owners if mentioned, and (4) any deadlines or risks.”
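If you want to wire that prompt into an assistant rather than paste it by hand, the core step is just assembling the thread into a single prompt. This is a minimal sketch, assuming the messages have already been fetched (for example via Slack's `conversations.replies` endpoint); the function name and message fields here are illustrative, not a real Creed API.

```python
# Sketch: turn a fetched Slack thread into the "missed it" summary prompt.
# Assumes messages arrive as simple dicts in thread order; fetching and
# model invocation happen elsewhere in your approved tooling.

SUMMARY_INSTRUCTIONS = (
    "Summarize this thread for someone who missed it. Include: "
    "(1) decisions made, (2) open questions, (3) action items with "
    "owners if mentioned, and (4) any deadlines or risks."
)

def build_thread_summary_prompt(messages: list[dict]) -> str:
    """messages: [{'user': 'alice', 'text': '...'}, ...] in thread order."""
    transcript = "\n".join(f"{m['user']}: {m['text']}" for m in messages)
    return f"{SUMMARY_INSTRUCTIONS}\n\n--- Thread ---\n{transcript}"
```

The resulting string is what you send to whichever model your team has approved; posting the answer back into the thread keeps the summary where the conversation happened.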
The best teams don’t use AI to replace conversation. They use it to make conversation retrievable.
The second most common pain:
“We had a great conversation… and then nothing happened.”
This is where AI shines as a translation layer—turning messy inputs into structured work. In our own internal thinking, the quietly useful applications are things like summarizing feedback before a sprint or drafting acceptance criteria faster: work that speeds up the flow without adding confusion.
High-impact use cases:
Try this workflow:
- Paste raw notes (or a Confluence page) into an approved, secure AI tool.
- Ask for 3 candidate Jira tickets, each with: user story, acceptance criteria, and definition-of-done checklist.
- Ask AI to flag unknowns that require human decisions.
This is especially powerful when paired with a human rule: AI can draft; humans decide. That principle keeps teams moving while protecting quality and accountability.
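The workflow above can be sketched as two small functions: one that wraps the notes in a ticket-drafting prompt, and one that parses the draft back out for human review. This assumes the model is asked to respond in JSON (something you would enforce with your provider's structured-output settings); all names here are illustrative.

```python
# Sketch of the notes-to-tickets bridge: AI drafts, humans decide.
import json

def build_ticket_prompt(raw_notes: str, n_tickets: int = 3) -> str:
    """Wrap raw meeting notes in a prompt asking for candidate Jira tickets."""
    return (
        f"From the meeting notes below, draft {n_tickets} candidate Jira "
        "tickets. For each, include: a user story, acceptance criteria, and "
        "a definition-of-done checklist. Separately, flag any unknowns that "
        "require a human decision. Respond as JSON with keys 'tickets' and "
        "'unknowns'.\n\n"
        f"Notes:\n{raw_notes}"
    )

def parse_ticket_response(model_output: str) -> tuple[list[dict], list[str]]:
    """Split the model's JSON reply into draft tickets and flagged unknowns.

    Everything returned here goes to a human before anything lands in Jira.
    """
    data = json.loads(model_output)
    return data.get("tickets", []), data.get("unknowns", [])
```

Keeping the "unknowns" list separate is the important design choice: it makes the human-decision step explicit instead of burying open questions inside draft tickets.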
Many companies already have the knowledge they need. The challenge is getting to it at the moment it matters, without breaking focus or relying on the one person who happens to know where everything lives.
That’s why teams are starting to experiment with internal GPTs and retrieval-based assistants that can pull answers from multiple systems, including tools like Notion, Confluence, Slack, and SharePoint. In these setups, AI is not creating new knowledge. It is helping teams access what already exists, across places where information is often fragmented.
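The retrieval half of that pattern is simpler than it sounds. A real setup would use embeddings and connectors into Notion, Confluence, Slack, or SharePoint; this sketch stands in with word-overlap scoring over an in-memory store, just to show the shape of the flow.

```python
# Minimal retrieval sketch for an internal knowledge assistant.
# Word-overlap scoring is a stand-in for embedding similarity.

def score(query: str, doc: str) -> float:
    """Fraction of query words that appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def retrieve(query: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Return the titles of the k documents most relevant to the query."""
    ranked = sorted(docs, key=lambda title: score(query, docs[title]),
                    reverse=True)
    return ranked[:k]
```

The retrieved passages are then placed in the model's context so it answers from knowledge the team already has, rather than generating new content.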
In our AI advisory work, we often see this explored through proof-of-concept efforts around AI-enhanced knowledge management. Sometimes that means aggregating internal documents and messages so employees can ask questions and get consistent answers. Other times, the first step is simply giving teams secure access to approved AI models inside company boundaries, so they can use AI without risking data exposure.
The collaboration benefit is not just faster answers. It is fewer interruptions, fewer context switches, and less “tribal knowledge” locked in a single person’s head. Over time, that shared access to information makes collaboration calmer and more resilient.
As soon as AI starts summarizing, recommending, or routing work, trust becomes a product requirement. That’s why we consistently push clients (and ourselves) toward clarity: labels, permissions, audit trails, and an explicit understanding of what the AI is doing and why.
In our internal AI implementation planning, we've treated governance, training, and rollout as core work rather than an afterthought, because adoption dies quickly when teams feel unsure or exposed.
We’ve learned that AI collaboration isn’t “one tool.” It’s a set of team habits.
Inside Creed, these collaboration shifts do not show up as one big initiative or a single AI tool. They show up differently depending on the work being done. Below are a few examples of how our design, project management, and engineering teams are using AI in small, repeatable ways to collaborate more effectively.
Designing for AI means designing for interaction. We’re seeing (and building) interfaces that include prompt fields, regenerate/edit paths, confidence indicators, and clear labeling. These are small UI decisions that keep humans in control while AI carries part of the load.
Internally, our designers use AI in a few consistent ways:
A subtle but important shift: AI makes it cheaper to explore. The design team’s job becomes “explore widely, decide deliberately.”
Project managers have always been the collaboration engine. AI doesn’t replace that. It gives PMs leverage.
A few patterns we’ve leaned into:
These are the “quiet wins” that make teams feel calmer while delivery speeds up.
On the engineering side, the most meaningful shift isn’t autocomplete—it’s agentic workflows.
We’ve been exploring tools and practices that let AI handle multi-step tasks while humans supervise, validate, and steer. In our internal material on Claude Code, a few collaboration-friendly practices stand out:
This improves collaboration because it changes what engineers bring to the team. Less time spent grinding through boilerplate, more time spent aligning on architecture, risk, and tradeoffs.
In other words: AI doesn’t just speed up code. It makes it easier for engineers to stay present in the team’s work.
8 collaboration techniques you can try next week
Here are practical techniques that don’t require a reorg or a moonshot platform migration:
- The “missed it” summary (Slack/Teams): decisions, open questions, next steps.
- The meeting-to-tickets bridge (Jira): raw notes → stories + acceptance criteria + unknowns.
- The “draft-first” spec (Confluence): outline the doc, then collaborate on what matters.
- The retro synthesizer: cluster feedback, propose 3 experiments for next sprint.
- The knowledge concierge: “Where is the latest doc? What changed since last sprint?”
- The code review buddy: AI drafts review notes; humans validate and decide.
- The onboarding copilot: answer repeat questions from SOPs + quizzes (we’ve even explored this as a formal onboarding pattern).
- The governance checklist: permissions, labeling, audit trails—trust is part of collaboration.
AI isn’t transforming everything overnight. But it is changing how teams design, build, and collaborate—especially when it’s embedded into real workflows instead of bolted on as a novelty.
At Creed, we’re focused on tools and patterns that support real work: staying in control, using time well, and building trust in the systems we ship.
And as we help clients evaluate readiness, build proof-of-concepts, and create rollout playbooks, we keep coming back to the same principle: AI creates value when it reduces friction and increases clarity.
If you’re experimenting right now, start small—but start where collaboration hurts most. Pick one workflow (threads → summaries, notes → tickets, docs → drafts), set guardrails, and measure whether your team feels more aligned a week later.
That’s the real benchmark: not “did we use AI,” but did we collaborate better because we did?