
There are a number of ways to tackle the 'execution time' issue. The right approach is highly contextual, and it requires an understanding of what kind of testing the automation in question is actually doing.
I've known teams that disable tests that take too long and only run them manually, sparingly, when they have a specific reason to; one way to set that up is sketched below. I've also seen teams that instead look at what they are testing, consider what is really valuable, and prioritize those tests over a full suite run.
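As one concrete way to do that, here is a minimal sketch assuming pytest as the runner: the long-running tests carry a custom marker so the everyday run deselects them, and they only execute when someone deliberately asks for them. The test names and the sleep are hypothetical stand-ins.

```python
import time

import pytest

# Register the marker in pytest.ini to avoid warnings:
#   [pytest]
#   markers =
#       slow: tests that take too long for the everyday run

@pytest.mark.slow
def test_full_regression_report():
    time.sleep(5)  # stand-in for a genuinely slow end-to-end check
    assert True

def test_quick_validation():
    assert 2 + 2 == 4

# Fast feedback loop (skips the marked tests):
#   pytest -m "not slow"
# Occasional, deliberate deep run:
#   pytest -m slow
```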
I hope most people don't end up in a situation where, for whatever reason, they need to stop the automation entirely. I have known teams that did, though, either because knowledge of what the tests covered had been lost, or because the tests were failing more than they were passing and the expertise to fix them was no longer in house. Some choose to do that alongside reprioritizing or rearranging their test strategy, which may mean writing new tests to replace the old.
Beyond that, you can look at ways to leverage hardware concurrency by parallelizing test execution, as in the first sketch below. This is not a simple programming exercise, though, and it will not solve every execution-time problem either. Another possibility is to reduce the sets of test data being used, so fewer test cases actually run. Some teams use a pairwise tool, like the one produced by Hexawise, to shrink the combinatorial matrix significantly; the second sketch below illustrates the idea.
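To make the concurrency idea concrete, here is a minimal sketch using only the Python standard library: independent test callables are fanned out across worker processes. The test functions are hypothetical, and a real runner plugin such as pytest-xdist handles the hard parts (worker isolation, fixtures, reporting) that this sketch ignores.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

# Hypothetical, independent tests. Parallel execution is only safe when
# tests share no mutable state (databases, files, global fixtures).
def test_login():
    assert 1 + 1 == 2

def test_checkout():
    assert "cart".upper() == "CART"

def test_search():
    assert sorted([3, 1, 2]) == [1, 2, 3]

def run_one(test):
    """Run a single test and report (name, passed, error message)."""
    try:
        test()
        return test.__name__, True, None
    except Exception as exc:
        return test.__name__, False, str(exc)

if __name__ == "__main__":
    tests = [test_login, test_checkout, test_search]
    # One worker process per core by default; a failure in one test
    # cannot corrupt the interpreter state of another.
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(run_one, t) for t in tests]
        for future in as_completed(futures):
            name, passed, error = future.result()
            print(name, "PASS" if passed else f"FAIL: {error}")
```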
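And to show what pairwise reduction buys you, here is a small self-contained sketch of a greedy pair-covering algorithm; the parameter values are made up, and a real tool such as Hexawise uses far more refined algorithms, but the principle is the same: cover every two-way combination of values with a fraction of the full matrix.

```python
from itertools import combinations, product

def pairwise_cases(parameters):
    """Greedily choose full combinations until every value pair across
    every two parameters has appeared in at least one chosen case."""
    all_cases = list(product(*parameters))

    # Every (param index, value) pairing that must co-occur somewhere.
    uncovered = set()
    for (i, vals_i), (j, vals_j) in combinations(enumerate(parameters), 2):
        uncovered |= {((i, a), (j, b)) for a in vals_i for b in vals_j}

    def pairs_of(case):
        return set(combinations(enumerate(case), 2))

    chosen = []
    while uncovered:
        # Pick the case that knocks out the most still-uncovered pairs.
        best = max(all_cases, key=lambda case: len(pairs_of(case) & uncovered))
        chosen.append(best)
        uncovered -= pairs_of(best)
    return chosen

if __name__ == "__main__":
    browsers = ["Chrome", "Firefox", "Safari"]
    systems = ["Windows", "macOS", "Linux"]
    locales = ["en", "de", "ja"]

    suite = pairwise_cases([browsers, systems, locales])
    print(f"exhaustive: {3 * 3 * 3} cases, pairwise: {len(suite)} cases")
    for case in suite:
        print(*case)
```

For three parameters of three values each, the exhaustive matrix is 27 cases, while the pairwise set here comes out around nine to twelve; the savings grow dramatically as parameters are added.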
I think questions about test execution time are a chance to engage stakeholders about what we are testing and why, and then to discuss what is important now and moving forward. Ideally, that conversation should be going on from day one.