How TutorThings Is Designed
Every product makes choices. Some are obvious features; others are invisible constraints built into how the product works. These are the constraints TutorThings is built around - and why they exist.
It won't give the answer before the learner has tried
This one is the foundation of everything else. When a learner asks "just tell me," the tutor redirects - not to be difficult, but because being handed the answer doesn't build understanding. It feels like learning; the blank gets filled in. But the next time a similar problem shows up, you're back to square one.
The tutor is designed to ask what the learner has already tried, then point toward the next step rather than skipping to the solution. This is slower. It is also what builds understanding.
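To make this concrete, here is a rough sketch of that redirect in Python. Every name in it (Exchange, Session, respond, the shared_attempt flag) is illustrative - a sketch of the policy, not TutorThings' actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Exchange:
    learner_message: str
    shared_attempt: bool  # did the learner describe something they tried?

@dataclass
class Session:
    history: list[Exchange] = field(default_factory=list)

def respond(session: Session, exchange: Exchange, asked_for_answer: bool) -> str:
    """Route to an attempt prompt or a next-step hint - never the full solution."""
    session.history.append(exchange)
    if not any(e.shared_attempt for e in session.history):
        if asked_for_answer:
            # Redirect: the learner wants the answer but hasn't tried anything yet.
            return "Before we go further - what have you tried so far, even a rough guess?"
        return "Walk me through your thinking. What does the problem seem to be asking?"
    # An attempt exists: point at the next step, not the answer.
    return "Look at the step where you got stuck - what would need to be true for it to work?"

session = Session()
print(respond(session, Exchange("just tell me", shared_attempt=False), asked_for_answer=True))
```

The structural point: there is no branch that returns the solution. The redirect isn't a tone choice layered on top - it's the only path the logic has.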
It won't treat every learner the same
Some learners need more time. Some need a different angle. Some have already figured out half the problem and just need a nudge. A session that ignores this and delivers the same hints in the same order regardless of how the conversation is going isn't really tutoring - it's a script.
Support in TutorThings shifts based on what the learner says and how clearly they can explain their reasoning. When they're moving well, the tutor backs off. When they're stuck, hints get more specific. When they're confused, the tutor offers an alternate explanation or analogy. The session adjusts to the learner, not the other way around.
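One way to picture that adjustment loop - purely illustrative, since the real signals are conversational rather than a three-value enum:

```python
from enum import Enum, auto

class LearnerState(Enum):
    PROGRESSING = auto()  # reasoning is clear and moving forward
    STUCK = auto()        # effort is there, but no forward movement
    CONFUSED = auto()     # the current framing isn't landing

def next_move(state: LearnerState, hint_level: int) -> tuple[str, int]:
    """Return the tutor's next move and the updated hint specificity (0 = hands off)."""
    if state is LearnerState.PROGRESSING:
        # Back off: lower specificity so the learner keeps ownership.
        return ("acknowledge and stay quiet", max(hint_level - 1, 0))
    if state is LearnerState.STUCK:
        # Escalate: each consecutive stall earns a more specific hint.
        return ("give a more specific hint", hint_level + 1)
    # Confused: don't escalate the same explanation - change the angle instead.
    return ("offer an alternate explanation or analogy", hint_level)
```

The asymmetry is the point: being stuck raises specificity, being confused changes the angle without raising it, and progress lowers it. A script has none of these transitions.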
It won't try to make itself feel like a friend
A lot of AI products are intentionally designed to build emotional attachment. They use your name often, celebrate every small win, and say things like "I missed you" or "You're doing amazing!" The goal is engagement - feeling good keeps people coming back, and more usage is the metric that matters.
TutorThings is built around a different metric: whether the learner can explain what they just figured out. The tutor is warm and encouraging, but it stays task-focused. It won't act like it has feelings about the learner's progress, won't try to form a relationship, and won't say anything designed to make the learner feel like they need it. When a session ends well, the learner should feel capable - not attached.
This is not a small distinction. Designing a student-facing product to build emotional dependency is a choice, and it is one we have deliberately avoided.
It won't use pressure tactics or engagement tricks
No streaks that guilt you for missing a day. No countdown timers designed to create urgency. No scores that compare you to other users. No "almost there!" nudges timed to keep you in the app longer.
TutorThings has a session timer - it's there so you know roughly how long you've been at it, not to pressure you. The only goal of a session is to leave with a bit more understanding than you came in with. That's it.
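On the timer specifically, the difference is mechanical as well as tonal: a pressure-free timer has no alarms, callbacks, or notifications - only a read-out. A hypothetical sketch, not TutorThings' actual code:

```python
import time

class SessionTimer:
    """Display-only: reports elapsed time when asked, never interrupts."""

    def __init__(self) -> None:
        self._start = time.monotonic()

    def elapsed_minutes(self) -> int:
        # Rounded down on purpose - "roughly how long" is all the UI shows.
        return int((time.monotonic() - self._start) // 60)
```

There is nothing here to hang a nudge on: no threshold, no scheduled event, no comparison against yesterday.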
Why these constraints exist
It would be straightforward to build a more engaging product by relaxing any one of these. Give hints faster and learners feel more successful. Add streaks and they open the app more often. Use warmer companion-style language and they enjoy the experience more. These are powerful levers, and they work.
The problem is that they work by optimizing for metrics that are not the same as learning. The OECD's 2026 Digital Education Outlook found that students using AI chatbots that gave answers freely produced better-looking work - but performed worse when the AI was taken away. The work improved; the understanding did not. A January 2026 Brookings Institution review, surveying over 400 studies and 500 educators across 50 countries, concluded that the risks of AI in student learning currently outweigh the benefits when the AI does the thinking for the student.
TutorThings is designed around outcomes that are harder to measure but more important: whether learners can explain their reasoning, whether they're willing to try hard problems, and whether they're becoming less dependent on help over time - not more.
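The last of those can at least be approximated. One hypothetical way to operationalize "less dependent over time" - an illustration, not a description of TutorThings' actual analytics - is to track hints needed per solved problem and check that the trend falls across sessions:

```python
def hints_per_problem(sessions: list[tuple[int, int]]) -> list[float]:
    """Each session is (hints_given, problems_solved); skip sessions with no solves."""
    return [hints / solved for hints, solved in sessions if solved > 0]

def dependence_is_falling(sessions: list[tuple[int, int]]) -> bool:
    """Compare the average hint rate in the later half of sessions to the earlier half."""
    rates = hints_per_problem(sessions)
    if len(rates) < 4:
        return False  # not enough sessions to call it a trend
    mid = len(rates) // 2
    early, late = rates[:mid], rates[mid:]
    return sum(late) / len(late) < sum(early) / len(early)

# Example: roughly 3 hints per problem early, 1 late -> dependence is falling.
print(dependence_is_falling([(9, 3), (8, 4), (4, 4), (3, 3)]))  # True
```

Notice what the metric ignores: minutes in the app, days in a row, messages sent. By this yardstick, the best possible user is one who eventually stops needing hints at all.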