I have this definition of an in-situ algorithm from my professor, but I don't understand it.

In-situ algorithms refer to algorithms that operate with Θ(1) memory.

What does that mean?

Are you familiar with Landau notation? – David Richerby 6 hours ago
"An algorithm is said to be an in situ algorithm, or in-place algorithm, if the extra amount of memory required to execute the algorithm is O(1), that is, [memory] does not exceed a constant no matter how large the input. For example, heapsort is an in situ sorting algorithm." en.wikipedia.org/wiki/In_situ#Computer_science – Auberon 5 hours ago

Constant space complexity of an algorithm

The amount of memory your algorithm uses is independent of the input.

An algorithm is said to have constant space complexity if it uses a fixed amount of space: for example, $10$ variables, or an array of exactly $10$ elements.

In-situ algorithms, however, perform their intended function on the input itself and thus require little or no extra space; the input is usually overwritten by the output as the algorithm executes. (ref)
When calculating space complexity, in-situ algorithms do not count the space occupied by the input; only the extra space is taken into account.
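
As an illustration (my own sketch, not part of the original answer; the function name reverse_in_place is made up), here is what an in-situ algorithm can look like in Python. The reversal overwrites its input and needs only two index variables, so the extra space stays constant no matter how long the list is:

    def reverse_in_place(arr):
        # In-situ reversal: the input list is overwritten with the output.
        # Extra memory: two integer indices, independent of len(arr).
        i, j = 0, len(arr) - 1
        while i < j:
            arr[i], arr[j] = arr[j], arr[i]  # swap without an auxiliary list
            i += 1
            j -= 1

By contrast, list(reversed(arr)) builds a second list of $n$ elements, so its extra space grows with the input.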


First, let's unpack what $\Theta(1)$ means.

Big $O$, and big $\Theta$, are classes of functions. There's a formal definition here, but for the purposes of this question, we say that a function $f$ is in $O(1)$ if there's a constant $c$ such that, for all $x$, $f(x) \leq c$. That is, $f$ grows at most as fast as a constant function.

Big-$\Theta$ doesn't add much over big-$O$ for constant bounds, because when describing an algorithm's time or space usage there isn't much below constant anyway. But to spell it out: $f \in \Theta(1)$ if there are constants $c, d > 0$ such that, for all $x$, $d \leq f(x) \leq c$. That is, $f$ grows at least as fast, and at most as fast, as a constant function.
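
As a concrete instance (my own example, not from the original answer): take $f(x) = 5 + \sin(x)$. Then $4 \leq f(x) \leq 6$ for every $x$, so the witnesses $d = 4$ and $c = 6$ show that $f \in \Theta(1)$, even though $f$ is not literally a constant function.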

Now what does this have to do with memory usage? Consider some algorithm $A$. There is some (mathematical) function which, given a size $n$, gives the maximum memory usage of $A$ over all inputs of size $n$. Let's call this function $mem$.

So now we combine our two concepts. If an algorithm uses $\Theta(1)$ memory, then its memory usage function is in $\Theta(1)$, meaning that there exist constants $d, c$ such that, for any input, the memory used is between $d$ and $c$.

In short, this means that the memory usage of the algorithm is in some constant range, regardless of the input.

Usually, the memory function does not account for the memory used to store the input to the algorithm, since otherwise the memory usage would always be $\Omega(n)$.
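
To make the $mem$ function concrete, here is a small Python sketch (my own illustration, not part of the original answer; both function names are made up). The first function uses a fixed number of variables beyond the input, so its extra memory is in $\Theta(1)$; the second allocates a fresh $n$-element list, so its extra memory is in $\Theta(n)$:

    def maximum(arr):
        # Extra memory: the variables `best` and `x`, reused on every
        # iteration -- a constant amount regardless of the input size.
        best = arr[0]
        for x in arr:
            if x > best:
                best = x
        return best

    def sorted_copy(arr):
        # sorted() builds a brand-new list of n elements, so the extra
        # memory grows linearly with the input: this is not Theta(1).
        return sorted(arr)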

"does not effectively depend on its input." -- for which definition of "effectively"? – Raphael 5 hours ago
As in, the memory used can change depending on the input, but only within a fixed interval. Feel free to edit it if you can think of a better wording. – jmite 5 hours ago
I don't think there is a better wording than "the memory used is between $d$ and $c$ for any input". Nor is there a need for one. – Raphael 5 hours ago

That means the additional memory the algorithm requires is bounded by some constant that does not depend on the input size, for all sufficiently large inputs.
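
For example (my own sketch, not from this answer): an iterative binary search keeps only a few index variables however large the sorted input is, so its additional memory is bounded by a constant (in the usual uniform-cost model, where one index counts as one unit of memory):

    def binary_search(arr, target):
        # Additional memory: the indices `lo`, `hi` and `mid` -- a constant
        # amount that does not depend on len(arr). The input is not modified.
        lo, hi = 0, len(arr)
        while lo < hi:
            mid = (lo + hi) // 2
            if arr[mid] == target:
                return mid
            elif arr[mid] < target:
                lo = mid + 1
            else:
                hi = mid
        return -1  # target not present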

