When and how do you make decisions about whether or not the features of a particular user story are helpful to the actual user?
When we worked with waterfall, we set KPIs for each project's success or failure, and we checked whether those criteria were met.
However, Scrum seems to focus more on whether or not the software works according to the user story's Acceptance Criteria.
Even if the software works fine, the actual user may not like the feature.
Agile seems to aim only at creating software that works on a per-Sprint basis. When and how do you set metrics to determine whether that working software has actually helped real users?
If real help isn't being measured on a per-Sprint basis, perhaps that's the problem to be solved.
Think of the leap of faith taken before the useful outcome of an investment can be gauged, and the importance of minimizing it.
There's quite a bit to unpack here.
I'll start with the misconception that "Agile seems to aim only to create software that works on a per-sprint basis". Agile is a set of values and principles. Sprints are a concept from the Scrum framework. There are methods consistent with the values and principles that do not have Sprints or any form of iterations - just look to continuous flow methods like those based on Kanban for an example. Even if you are using Scrum and have Sprints, the Sprint is not a cadence for delivery. The Sprint is a planning horizon and feedback loop. The Scrum Guide says that "the Sprint Review should never be considered a gate to releasing value." Continuous Delivery and Continuous Deployment are consistent with the Scrum framework.
This is really an aside, though. When you deliver work is independent of making sure that the work was truly valuable. Making sure that the work is valuable is an ongoing process.
First, when the team takes on work, some stakeholder must have thought it valuable enough to mention. Members of the Scrum Team shouldn't be adding work to the Product Backlog without a source. The value could be to the Scrum Team itself - an example would be a Product Backlog Item about paying down technical debt or making enhancements that make it easier to build, test, and deliver the product. The value could also be to an external stakeholder. In any case, at least two people understand the work and why it's valuable - the stakeholder asking for it and the Product Owner.
Consider that the Product Owner is accountable for maximizing the value of the work done by the Scrum Team. Because the Product Owner is aware of the value of each of the Product Backlog Items, through conversations with various stakeholders or stakeholder groups, the Product Owner can make decisions about ordering the Product Backlog Items and make sure that the current state of the Product Backlog is transparent to and understood by all of the key stakeholders. These conversations can lead to a deeper understanding of the Product Backlog Items and of what users may or may not find valuable, which would then be reflected in the ordering.
The Scrum framework doesn't go into much detail about how these conversations unfold. However, this is the domain of product management and requirements engineering. Plenty has been written about these topics in both plan-driven and agile contexts that can help you capture, understand, and sort through competing user perspectives, needs, and wants. Organizational strategies and goals also influence the direction that products take.
Once a change has been delivered, there are plenty of tools and techniques for monitoring how that change is being received. In the software world, we have real user monitoring and other types of observability tools that can record how users are interacting with the software, which lets you determine whether new features are being used, how often, and in what ways. In some contexts, there are also ways to establish success metrics and perform A/B testing on a live system. Tracking information from customer support, such as bug reports, feedback, and feature requests, can give insight into what people are doing. A lot of the details depend on the type of system - software that powers an online marketplace will have different metrics and success criteria than a system that helps to automate business processes in a manufacturing plant. Understanding doesn't end with delivery.
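As a rough illustration of what "measuring after delivery" can look like in practice, here is a minimal sketch in Python, not tied to any particular analytics or monitoring product. It shows two of the ideas above: deterministically assigning users to an A/B variant and computing a simple adoption metric from usage events. The experiment name "checkout_redesign" and the event structure are hypothetical.

```python
# Minimal sketch of per-feature measurement; assumes usage events have already
# been collected by whatever monitoring pipeline is in place.
import hashlib


def assign_variant(user_id: str, experiment: str,
                   variants: tuple[str, ...] = ("control", "treatment")) -> str:
    """Stable hash-based bucketing: the same user always gets the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]


def adoption_rate(events: list[dict], feature: str) -> float:
    """Share of distinct active users who used the given feature at least once."""
    active_users = {e["user_id"] for e in events}
    feature_users = {e["user_id"] for e in events if e["event"] == f"used:{feature}"}
    return len(feature_users) / len(active_users) if active_users else 0.0


# Hypothetical events, e.g. from real user monitoring or application logs.
events = [
    {"user_id": "u1", "event": "used:checkout_redesign"},
    {"user_id": "u2", "event": "page_view"},
    {"user_id": "u3", "event": "used:checkout_redesign"},
]
print(assign_variant("u1", "checkout_redesign"))   # same variant every time for u1
print(adoption_rate(events, "checkout_redesign"))  # 0.666...
```

The point is not this particular metric. It's that the team decides in advance which signal would indicate the feature actually helped users, then looks at that signal after delivery and brings it to conversations like the Sprint Review described below.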
In Scrum, the Sprint Review is an opportunity to talk to key stakeholders or stakeholder representatives. This is where the Scrum Team can review the work that they've done, review data that they have been able to gather from or about the product, and have two-way conversations with key stakeholders to get feedback about how the work they are doing is being received.