If we want to write a Queue class (in C++), one approach to implementing it is to use two stacks. We have an "in stack" and an "out stack," which are basically used to shuffle the elements around when you want to get to the front. The front of the queue would be at the bottom of the "in stack," but if we pop the elements into the "out stack," then the front of the queue ends up at the very top of the "out stack."
My question deals with the peek method (returning but not removing the front of the queue). I first have to shuffle everything from the "in stack" to the "out stack" and then call peek on the "out stack" to see the front of the queue. How is doing all of this O(1), or constant, running time? Quoting what Dave L said in his answer here (How to implement a queue using two stacks?):
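For reference, here is roughly what I mean, as a minimal sketch of the approach I'm describing (the class and method names are just my own, not from any library):

```cpp
#include <stack>
#include <stdexcept>

// Minimal sketch of the two-stack queue described above.
template <typename T>
class Queue {
    std::stack<T> in;   // newly enqueued elements
    std::stack<T> out;  // reversed elements; the front of the queue sits on top

    // Move everything from "in" to "out" so the front becomes accessible.
    void shuffle() {
        if (out.empty()) {
            while (!in.empty()) {
                out.push(in.top());
                in.pop();
            }
        }
    }

public:
    void enqueue(const T& value) { in.push(value); }

    // Peek: the part I'm asking about. In the worst case (everything
    // still sitting in "in"), shuffle() touches all n elements.
    const T& peek() {
        shuffle();
        if (out.empty()) throw std::runtime_error("queue is empty");
        return out.top();
    }

    void dequeue() {
        shuffle();
        if (out.empty()) throw std::runtime_error("queue is empty");
        out.pop();
    }

    bool empty() const { return in.empty() && out.empty(); }
};
```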
"...each element will be in each stack exactly once - meaning each element will be pushed twice and popped twice, giving amortized constant time operations."
But how is that still O(1) running time? You still have to go through n elements in the worst case (meaning everything was in the "in stack" to begin with). I could say the same thing about two arrays, then. Take, for example, an array with a million elements that I want to reverse. If I iterate from the back to the front of the filled array while copying the data into the other array from index 0 to n-1, that is O(n) running time, since I have to access each element once. How is this different for two stacks?
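Here is the array comparison I have in mind (again, just a sketch to show what I mean):

```cpp
#include <vector>
#include <cstddef>

// Reverse src into a new array by walking src back to front:
// clearly O(n), since every element is copied exactly once.
std::vector<int> reverseIntoNewArray(const std::vector<int>& src) {
    std::vector<int> dst;
    dst.reserve(src.size());
    for (std::size_t i = src.size(); i > 0; --i) {
        dst.push_back(src[i - 1]);
    }
    return dst;
}
```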
-Dan Hoynoski