Do UAT issues found after a sprint have to be logged as bugs (even if they are enhancements)?
In our team, after every sprint, management asks us to conduct a week of UAT to go over the test-ready user stories. If UAT users raise observations that are considered enhancements, what process should we follow? Should we log a defect in Jira, then convert it to a user story and close the defect with a resolution of "new requirements"? Could you please outline a common process followed in such a confusing situation?
There are a couple of ways to handle this.
Personally, I prefer to consider UAT as a validation exercise that exists outside of the Scrum framework. At least once a Sprint, the Development Team creates a potentially releasable increment and the Product Owner can choose to release this increment. In this particular situation, the Product Owner would release the increment to UAT. I would recommend logging any findings in a way that distinguishes deviations from expected behavior (defects) from modifications suggested based on the experience (enhancements).
However, some teams consider the completion of UAT as part of their Definition of Done. A week-long UAT cycle at the end of the Sprint may make this more difficult. In this case, the closure or resolution of UAT feedback would be done prior to the Sprint Review and the end result would be an increment that, upon the Product Owner's decision, could be released.
There may be other ways as well, but these are the two that I've seen work.
I have experience with both scenarios: one where UAT findings are not taken on as part of the same sprint/increment, and one where they are.
The important point here, in my opinion, is twofold: 1. If the users performing UAT are outside the team, the team has no control over its outcome. Since the team should be self-organized with all the capabilities on board to deliver a Done increment, UAT falls outside of this. By that line of reasoning, UAT itself is not part of the Definition of Done, and thus not part of the Done increment.
2. If you organize UAT as the last week of the sprint, you have no continuous development cycle and end up with a more or less (miniature) waterfall inside an agile process. This is not desirable.
Many times, it all comes down to being able to release early and often! If you can release multiple times per sprint, or even per day, there is no real necessity for UAT findings to hold back the release of an increment. If fixes for findings can be released to the end users often, it is even desirable not to make the release of the increment wait, since the earlier you release, the earlier the feedback loop for the increment is created.
So, in any debate about UAT, the ability to release is a very strong factor (even if this is not immediately apparent), and it may also hold the solution to the debate.
My opinion is that your Definition of Done would not include User Acceptance Testing (UAT) and that the Product Owner's release decision would be whether the increment is potentially releasable to UAT. Any feedback received during UAT would be handled exactly as if it came from users in Production. It might be determined that even though an item was created during UAT, the product could still be released to Production without that item being addressed. It is an opportunity to inspect and adapt the product, so items would be created in the Product Backlog if the Product Owner determines the work is needed for the product. Since you specifically mentioned Jira, I would make everything that comes in from UAT feedback a new User Story.
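For what it's worth, if you do capture UAT feedback directly as User Stories in Jira rather than as Bugs, that can even be scripted against the Jira REST API. Below is a minimal sketch assuming Jira Cloud with basic auth (email plus API token); the base URL, project key, and "uat-feedback" label are placeholders for illustration, not something your instance will already have.

```python
# Minimal sketch: create a UAT observation as a Story in the Product Backlog.
# Assumptions: Jira Cloud, /rest/api/2, basic auth with an API token,
# a hypothetical project key "PROD" and a "uat-feedback" label for traceability.
import requests
from requests.auth import HTTPBasicAuth

JIRA_BASE_URL = "https://your-domain.atlassian.net"    # assumption: your Jira Cloud site
AUTH = HTTPBasicAuth("user@example.com", "api-token")   # assumption: email + API token

def create_uat_story(summary: str, description: str, project_key: str = "PROD") -> str:
    """Create a new Story from a piece of UAT feedback and return its issue key."""
    payload = {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Story"},   # logged as a Story, not a Bug
            "labels": ["uat-feedback"],       # assumption: label used to trace UAT origin
        }
    }
    response = requests.post(
        f"{JIRA_BASE_URL}/rest/api/2/issue",
        json=payload,
        auth=AUTH,
    )
    response.raise_for_status()
    return response.json()["key"]

# Example usage: turn a UAT observation straight into a Product Backlog item.
# print(create_uat_story("Allow export of the UAT report to CSV",
#                        "Requested by UAT users after the Sprint 14 increment."))
```

That way nothing is logged as a defect and later converted; the feedback enters the Product Backlog directly for the Product Owner to order.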
I agree with @Xander Ladage that your organization might consider releasing to UAT on a more frequent basis. But that isn't always practical. If the people performing the UAT are not able to test frequently, then giving them smaller increments isn't really a benefit. So be smart about what you do and try to get as much feedback as you can as soon as possible.
In our team, after every sprint, management asks us to conduct a week of UAT to go over the test-ready user stories.
That suggests each Sprint is, in truth, at least a week longer than its nominal length. It will take at least another week before a "Done" increment is produced. Is this understood by all?
If UAT users raise observations that are considered enhancements, what process should we follow?
Those UAT users are effectively Development Team members, because their work is needed to deliver a properly tested increment by the end of each Sprint. Why don't they discuss any potential enhancements with the Product Owner during Product Backlog refinement? He or she can then consider them and decide whether or not to include them in the Product Backlog.