Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields.
What are some applications of abstract algebra in computer science an undergraduate could begin exploring after a first course?

Gallian's text goes into Hamming distance, coding theory, etc. I vaguely recall seeing discussions of abstract algebra in theory of computation / automata theory, but what else? I'm not familiar with any applications beyond these.

Bonus: What are some textbooks / resources from which one could learn about these applications?

For a related question see Uses of algebraic structures in theoretical computer science on TCS. – J.-E. Pin 9 hours ago
    
Though well intentioned, this question is hopelessly broad. – djechlin 1 hour ago
    
On both the CS and algebra sides. Are graphs and trees algebraic structures? Does every application of linear algebra count as an application? Does every algorithm applied in algebra count as computer science? What if we prove it is NP-hard? Are number-theoretic results algebra? What about counting? When I use the binomial theorem to analyze Bernoulli random variables when studying $\mathbf{RP}$ and $\mathbf{BPP}$, is that abstract algebra enough? – djechlin 2 mins ago

10 Answers


Algebra is incredibly useful in computer science. I'll preface my answer with my opinion: I view a good portion of computer science as a branch of mathematics. So my answer will be quite broad. Note that my opinion is one that not everyone shares.

Algebraic Combinatorics and Graph Theory: Recall Cayley's Theorem from group theory, which states that every group is isomorphic to a subgroup of some symmetric group. That is, every group is a permutation group. This immediately motivates the study of algebraic combinatorics. Group actions are used to study symmetries, or automorphisms, of mathematical objects. Informally, a group action is a dynamical process on an object, which partitions its elements into sets we call orbits. The study of the structure and quantity of these orbits yields important combinatorial results. You have likely seen the dihedral group, which is the automorphism group of the cycle graph. In general, finding the full automorphism group of an object (such as a graph) is a non-trivial exercise.

There are several applications that come to mind. The first is Polya Enumeration Theory. Given a mathematical object (such as a graph), we color the vertices (this is not necessarily a proper coloring, in the sense that two adjacent vertices may receive the same color). Then given some subgroup $H$ of automorphisms (such as rotations of a necklace), how many distinct colorings exist? That is, two colorings are equivalent if they belong to the same orbit under the action of $H$. Polya enumeration theory studies these types of problems. If your combinatorics is weak (and even if it isn't), Loehr's Bijective Combinatorics text is an excellent resource.
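As a small illustration (not from the original answer), Burnside's lemma, the counting workhorse behind Polya theory, can be checked directly in a few lines of Python; the function names here are my own:

```python
from math import gcd
from itertools import product

def necklace_count_burnside(n, colors):
    # Burnside's lemma: the number of orbits equals the average number
    # of colorings fixed by each rotation; rotation by k fixes
    # colors**gcd(k, n) colorings of an n-bead necklace.
    return sum(colors ** gcd(k, n) for k in range(n)) // n

def necklace_count_bruteforce(n, colors):
    # sanity check: enumerate orbits explicitly via a canonical
    # representative (the lexicographically least rotation)
    reps = set()
    for c in product(range(colors), repeat=n):
        rep = min(tuple(c[(i + k) % n] for i in range(n)) for k in range(n))
        reps.add(rep)
    return len(reps)

print(necklace_count_burnside(4, 2))  # 6 distinct 2-colorings of a 4-bead necklace
```

Both counts agree, which is exactly the statement that Burnside's lemma counts orbits of the rotation action.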

In algebraic graph theory, it is common to examine the Cayley graph of a group. Given a group $G$ and a subset $S \subseteq G$, the Cayley graph $X(G, S)$ has vertex set $G$, and two elements $g, h$ are connected by an edge if $hg^{-1} \in S$. Intuitively, the Cayley graph tells us whether an element of $S$ can be used to take us from $g$ to $h$. We may examine a related graph of transpositions from $\text{Sym}(n)$, and leverage algorithms from graph theory to ascertain whether the transpositions generate $\text{Sym}(n)$. I cover some of this in my lecture notes for Theory of Computation (http://people.math.sc.edu/mlevet/Lecture_Notes.pdf). Godsil and Royle's text on Algebraic Graph Theory is a more comprehensive resource.
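The generation test mentioned above can be sketched concretely: breadth-first search on the Cayley graph from the identity reaches every permutation exactly when the chosen generators generate $\text{Sym}(n)$. A minimal sketch (my own function names, permutations as tuples of images):

```python
import math
from collections import deque

def generates_sym_n(n, gens):
    # gens: permutations of {0..n-1} given as tuples of images.
    # BFS on the Cayley graph starting at the identity; the generators
    # generate Sym(n) iff every permutation is reachable.
    identity = tuple(range(n))
    seen = {identity}
    queue = deque([identity])
    while queue:
        g = queue.popleft()
        for s in gens:
            h = tuple(g[s[i]] for i in range(n))  # apply s, then g
            if h not in seen:
                seen.add(h)
                queue.append(h)
    return len(seen) == math.factorial(n)

# the adjacent transpositions (0 1) and (1 2) generate Sym(3)
print(generates_sym_n(3, [(1, 0, 2), (0, 2, 1)]))  # True
```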

A third thought that comes to mind is the study of graph homomorphisms. In particular, a proper coloring of a graph is a graph homomorphism into a complete graph: if the graph $X$ has chromatic number $c$, then there exists a graph homomorphism $\phi : X \to K_{c}$. So there is a direct connection to complexity theory here, as graph coloring is NP-Hard. Constraint satisfaction problems are generalizations of this, where we study homomorphisms of relational structures of finite rank. Constraint satisfaction problems arise frequently in artificial intelligence. Algebraic techniques (granted, not senior-level algebra techniques) arise in the study of CSPs. The Dichotomy Conjecture states that every CSP is either in $P$ or NP-Complete. Dmitriy Zhuk recently presented his results at a two-week conference at Vanderbilt in September, settling the conjecture in the affirmative. While he has not put forth a paper, the folks there were relatively convinced of his results (including the logician at my school, who was a Tarski student).
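The coloring-as-homomorphism view is easy to make concrete. A sketch (function name mine): a map of vertices to colors is a homomorphism into $K_c$ exactly when adjacent vertices receive distinct colors, since distinct vertices of $K_c$ are always adjacent.

```python
def is_homomorphism_to_Kc(edges, coloring, c):
    # A proper c-coloring of a graph is precisely a graph homomorphism
    # into the complete graph K_c: every edge's endpoints must map to
    # distinct vertices (colors) of K_c.
    return (all(coloring[u] != coloring[v] for u, v in edges)
            and all(0 <= x < c for x in coloring.values()))

triangle = [(0, 1), (1, 2), (0, 2)]  # K_3 has chromatic number 3
print(is_homomorphism_to_Kc(triangle, {0: 0, 1: 1, 2: 2}, 3))  # True
print(is_homomorphism_to_Kc(triangle, {0: 0, 1: 1, 2: 0}, 3))  # False
```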

Automata Theory: The set of regular languages forms a Kleene algebra (an idempotent semiring with a star operation) under set union (addition) and concatenation (multiplication). The notion of the derivative also comes up; see the Brzozowski derivative. We can leverage this fact to design a linear-algebraic algorithm (see the Brzozowski Algebraic Method) to convert an FSM to a regular expression. (I also cover this in my notes.)
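The Brzozowski derivative is compact enough to implement directly. A sketch (my own encoding: a regex is a tuple tagged 'empty', 'eps', 'chr', 'alt', 'cat', or 'star') that matches a string by repeatedly differentiating:

```python
# Regular-expression matching via Brzozowski derivatives.

def nullable(r):
    # does the language of r contain the empty string?
    tag = r[0]
    if tag in ('eps', 'star'):
        return True
    if tag in ('empty', 'chr'):
        return False
    if tag == 'alt':
        return nullable(r[1]) or nullable(r[2])
    return nullable(r[1]) and nullable(r[2])  # 'cat'

def deriv(r, a):
    # the derivative of r with respect to the symbol a
    tag = r[0]
    if tag in ('empty', 'eps'):
        return ('empty',)
    if tag == 'chr':
        return ('eps',) if r[1] == a else ('empty',)
    if tag == 'alt':
        return ('alt', deriv(r[1], a), deriv(r[2], a))
    if tag == 'cat':
        d = ('cat', deriv(r[1], a), r[2])
        return ('alt', d, deriv(r[2], a)) if nullable(r[1]) else d
    return ('cat', deriv(r[1], a), r)  # 'star'

def matches(r, s):
    for a in s:
        r = deriv(r, a)
    return nullable(r)

r = ('star', ('cat', ('chr', 'a'), ('chr', 'b')))  # (ab)*
print(matches(r, "abab"))  # True
print(matches(r, "aba"))   # False
```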

Complexity Theory: Babai's recent result on the Graph Isomorphism problem is (or should be) the poster child for algebra in CS. He uses a lot of techniques such as group actions and representation theory to study problems like Graph Isomorphism and Group Isomorphism. The Group Isomorphism problem is undecidable in the general case (for finitely presented groups). For finite groups, the best bound is something like $n^{\log n}$ (which is easy to obtain, simply by showing that every finite group of order $n$ has a generating set with at most $\log_{2}(n)$ generators). For abelian groups, Group Isomorphism has an $O(n)$ time solution. For solvable groups, the $n^{\log n}$ bound has been relatively untouched. Babai and Qiao presented a polynomial time isomorphism test for groups with abelian Sylow towers. To the best of my knowledge (which is limited), this is the largest class of solvable groups to date with a polynomial time isomorphism test.
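The $\log_{2}(n)$ generating-set bound can be demonstrated concretely: each generator taken outside the current subgroup at least doubles that subgroup, so a greedy pass never picks more than $\log_{2}(n)$ generators. A sketch assuming the group is given by its elements and operation (function names mine), shown on $\mathbb{Z}_{12}$:

```python
import math

def subgroup_generated(gens, op, identity):
    # close {identity} under right-multiplication by the generators;
    # in a finite group this yields the generated subgroup
    elems = {identity}
    changed = True
    while changed:
        changed = False
        for a in list(elems):
            for g in gens:
                x = op(a, g)
                if x not in elems:
                    elems.add(x)
                    changed = True
    return elems

def greedy_generators(group, op, identity):
    # pick any element outside the current subgroup; each pick at
    # least doubles the subgroup, so we stop after <= log2(n) picks
    gens = []
    current = {identity}
    for x in group:
        if x not in current:
            gens.append(x)
            current = subgroup_generated(gens, op, identity)
    return gens

n = 12
group = list(range(n))                # Z_12 under addition mod 12
op = lambda a, b: (a + b) % n
gens = greedy_generators(group, op, 0)
print(len(gens), math.log2(n))        # generating-set size vs. the log2 bound
```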

A lot of the techniques Babai uses are quite heavy handed. If these types of problems catch your interest, I might spend some time on representation theory. I think a solid handle on group theory (comfort with the Sylow Theorems) and abstract linear algebra, along with motivation, is sufficient to pick up a book in the area. There are several books on Representation Theory from a combinatorial perspective. Sagan's text on the Symmetric Group is one text that comes to mind. Babai also teaches classes on Algorithms in Finite Groups, which you can find on his home page. Computational Group Theory is the key phrase, if you decide to look into this area more.

Cryptology, Coding Theory, and Computational Number Theory: These are the obvious ones, and have been covered quite well.

share|cite|improve this answer
    
Thanks: this is an informative answer. Could you please clarify the statement that "group isomorphism is undecidable in the general [infinite] case"? How do you define the decision problem with infinite inputs? Is it something like "take as input ZFC-descriptions of the two groups"? (In that case, undecidability makes sense, as you could trivially embed the halting problem into the group operation.) Or is this something else? – wchargin 14 hours ago
    
We deal with finitely representable groups. So the input string for the Turing Machine is finite. The fact that the group Isomorphism problem is undecidable follows from the undecidability of the free group problem (we reduce the free group problem to group Isomorphism). – ml0105 14 hours ago
    
Ah, finitely representable is the key. Thanks. – wchargin 14 hours ago
    
I think this answer is the winner or me, despite all of them being superb. I'm starting representation theory next semester and would have never known about that application. I was primarily interested in it for physics applications, but now with CS applications that's even better. Think I'll explore that one further. Thank you for taking the time to write all of this up! – fluffy_muffin 13 hours ago

Here is a link to a seminar course given at KTH, Stockholm, in 2014, the topic of the semester being "Algebraic Gems in TCS". There are several links to other, similar courses on the bottom of the page, as well as links and references to some relevant literature a bit higher up:

http://www.csc.kth.se/utbildning/kth/kurser/DD2442/semteo14/

As regards automata theory and algebra, there are interesting connections between automata and what are known in algebra as semigroups (the difference between a semigroup and a group being that semigroups need not have identity elements or inverses), and one of the main links between the subjects is the syntactic monoid (see https://en.wikipedia.org/wiki/Syntactic_monoid for more information). You can also try looking for resources covering algebraic automata theory, apparently also known as Krohn-Rhodes theory:

https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory
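To make the automata-semigroup connection concrete, here is a sketch (my own function names; states assumed to be numbered 0..n-1) that computes the transition monoid of a small DFA, the finite monoid of state-to-state functions induced by input words, which is closely related to the language's syntactic monoid:

```python
from itertools import product

def transition_monoid(states, alphabet, delta):
    # each word induces a function states -> states (stored as a tuple);
    # collect the distinct functions generated by the letters under
    # composition, starting from the identity (the empty word)
    letter_maps = {c: tuple(delta[(q, c)] for q in states) for c in alphabet}
    monoid = {tuple(states)}
    frontier = set(letter_maps.values())
    while frontier:
        monoid |= frontier
        new = set()
        for f, g in product(monoid, letter_maps.values()):
            h = tuple(g[f[q]] for q in states)  # apply f, then letter map g
            if h not in monoid:
                new.add(h)
        frontier = new
    return monoid

# DFA for "an even number of a's" over {a, b}; state = parity of a's
states = (0, 1)
delta = {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}
m = transition_monoid(states, 'ab', delta)
print(len(m))  # 2: the identity and the parity flip, i.e. the group Z/2
```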

Depending on one's viewpoint, one may consider calling category theory a branch of algebra, and categories have found quite a lot of application, especially in connection with functional programming. First courses in algebra typically don't cover categories, but if you'd like a reference to catch up, Awodey's book is quite pleasant to read and presents some connections to computer science. You could probably also google "category theory for computer science" and end up finding really good notes for free :)

And, as Chas Brown states, cryptography is another part of CS where algebra has found applications. You will probably find a lot of group and field theory applied there.

And then there is the entire subject of coding theory, but judging from the post you already knew about this.

And then I guess there is at least some more that I can't think of right now…but hopefully this should keep you busy :)


The theory of Gröbner bases gives a way to solve simultaneous multivariable polynomial equations. The key component underlying this is Buchberger's algorithm, which, given an ideal $I$ of a polynomial ring $R = k[x_1,\dots,x_n]$, computes a Gröbner basis of $I$: an especially nice generating set that makes it MUCH easier to answer certain questions about the ideal.

The algorithm is interesting in that, naively, it's not clear that it terminates. Fortunately, by working with something called a monomial order, we can guarantee that at each iteration of the algorithm we get "closer" to a solution, so it must terminate.
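A toy version of Buchberger's algorithm fits in a page. The sketch below (my own encoding: polynomials in $x, y$ as dicts from exponent pairs to coefficients, lex order with $x > y$, top-reduction only) computes a Gröbner basis for the circle $x^2 + y^2 - 1$ intersected with the line $x - y$; the lex basis eliminates $x$, producing a polynomial in $y$ alone:

```python
from fractions import Fraction
from itertools import combinations

def lead(p):
    return max(p)  # lex order on exponent tuples, x > y

def reduce_poly(p, basis):
    # top-reduction: repeatedly cancel the leading term of p against a
    # basis element whose leading monomial divides it
    p = {m: Fraction(c) for m, c in p.items()}
    while p:
        lm = lead(p)
        for g in basis:
            gm = lead(g)
            if lm[0] >= gm[0] and lm[1] >= gm[1]:
                shift = (lm[0] - gm[0], lm[1] - gm[1])
                coef = p[lm] / Fraction(g[gm])
                for m, c in g.items():
                    mm = (m[0] + shift[0], m[1] + shift[1])
                    p[mm] = p.get(mm, Fraction(0)) - coef * c
                    if p[mm] == 0:
                        del p[mm]
                break
        else:
            break  # leading term irreducible: stop
    return p

def s_poly(f, g):
    # S(f, g) = (lcm / lt(f)) f - (lcm / lt(g)) g
    lf, lg = lead(f), lead(g)
    lcm = (max(lf[0], lg[0]), max(lf[1], lg[1]))
    res = {}
    for p, lm, sign in ((f, lf, 1), (g, lg, -1)):
        shift = (lcm[0] - lm[0], lcm[1] - lm[1])
        coef = Fraction(sign) / Fraction(p[lm])
        for m, c in p.items():
            mm = (m[0] + shift[0], m[1] + shift[1])
            res[mm] = res.get(mm, Fraction(0)) + coef * c
    return {m: c for m, c in res.items() if c != 0}

def buchberger(polys):
    # add reduced S-polynomials until every pair reduces to zero
    basis = [dict(p) for p in polys]
    pairs = list(combinations(range(len(basis)), 2))
    while pairs:
        i, j = pairs.pop()
        r = reduce_poly(s_poly(basis[i], basis[j]), basis)
        if r:
            pairs += [(k, len(basis)) for k in range(len(basis))]
            basis.append(r)
    return basis

f1 = {(2, 0): 1, (0, 2): 1, (0, 0): -1}  # x^2 + y^2 - 1
f2 = {(1, 0): 1, (0, 1): -1}             # x - y
basis = buchberger([f1, f2])
eliminated = [g for g in basis if all(m[0] == 0 for m in g)]
print(len(basis), len(eliminated))  # the basis gains a polynomial in y alone
```

Back-substituting from the eliminated polynomial solves the system, which is exactly the "triangular" behavior a lex Gröbner basis provides.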


Some functional programming languages, such as Haskell, make essential use of monads, a structure borrowed from category theory.
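As a tiny illustration (not tied to any particular language), the Maybe/Option monad chains computations that may fail; `bind` short-circuits on failure so no explicit error checks are needed between steps. Function names here are my own:

```python
# A minimal Maybe/Option monad sketch: None represents failure.

def bind(value, f):
    # monadic bind: propagate failure, otherwise apply the next step
    return None if value is None else f(value)

def safe_div(a, b):
    return None if b == 0 else a / b

# (10 / 2) / 5, then + 1: succeeds end to end
print(bind(bind(safe_div(10, 2), lambda x: safe_div(x, 5)),
           lambda x: x + 1))                               # 2.0
# a division by zero anywhere collapses the whole chain to None
print(bind(safe_div(10, 0), lambda x: safe_div(x, 5)))     # None
```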


Cryptography (including elliptic curve cryptography) comes to mind.

I suppose one could also argue that, for performance reasons, many problems are simplified by some sort of group action on the data set under consideration, reducing the problem to something more manageable; so it certainly doesn't hurt to have a good grounding in the basics.


Can we call the entire field of symbolic computing an application? Or is that too meta?


Algebra (and in particular, universal algebra) is also used in the semantics of programming languages. A simple example of this is process algebra, of which many variants exist for different application areas. Broadly speaking, a universal algebra gives you rules for generating the syntactically correct strings of whatever language you are considering.

A key insight due to Turi and Plotkin is that while the algebraic side of things describes what strings can be formed in a language, the coalgebraic side describes the semantics of the language.
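A sketch of the algebraic half of this picture (my own toy signature, not Turi and Plotkin's formalism): the syntax of a language is a term algebra, and any semantics is the unique homomorphism (a fold) from that term algebra into another algebra over the same signature. Two different target algebras give an evaluator and a pretty-printer:

```python
from dataclasses import dataclass

# Term algebra over the signature {Lit/1, Add/2}.

@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: object
    right: object

def fold(term, lit, add):
    # the unique homomorphism from the term algebra into any other
    # algebra given by operations (lit, add)
    if isinstance(term, Lit):
        return lit(term.value)
    return add(fold(term.left, lit, add), fold(term.right, lit, add))

t = Add(Lit(1), Add(Lit(2), Lit(3)))
print(fold(t, lambda v: v, lambda a, b: a + b))   # 6
print(fold(t, str, lambda a, b: f"({a}+{b})"))    # (1+(2+3))
```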


The hidden subgroup problem lends itself well to quantum algorithms. Solving it for Abelian groups yields polynomial time prime factorization (Shor's algorithm); some other groups would provide similarly powerful results (e.g. the hidden subgroup problem on the symmetric group is equivalent to the graph isomorphism problem). That makes algebra (specifically representation theory) an important tool in quantum computing.
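The quantum speedup in Shor's algorithm is confined to order finding; the surrounding number theory is classical and small enough to sketch. Below, the order is brute-forced classically (the step a quantum computer accelerates), and the post-processing recovers a factor; function names are my own:

```python
from math import gcd

def order(a, n):
    # multiplicative order of a mod n, found by brute force;
    # this is the step Shor's algorithm computes quantumly
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

def shor_classical(n, a):
    # classical post-processing: if the order r of a mod n is even and
    # a^(r/2) != -1 (mod n), then gcd(a^(r/2) - 1, n) is a nontrivial
    # factor of n; returns None when this choice of a fails
    if gcd(a, n) != 1:
        return gcd(a, n)
    r = order(a, n)
    if r % 2:
        return None
    x = pow(a, r // 2, n)
    if x == n - 1:
        return None
    f = gcd(x - 1, n)
    return f if 1 < f < n else None

print(shor_classical(15, 7))  # 3, a nontrivial factor of 15
```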


Error-correcting codes are another thing that comes up in EECS. You want to send $k$ symbols without error, so you add some redundant symbols and are then able to correct some number of errors. Examples include Reed-Solomon codes, Hamming codes, etc.


An additional note on Cayley graphs: There has been a great deal of research on using Cayley graphs to design communication architectures for parallel and distributed computation, in which each vertex represents a separate processor and each edge represents a communication link between two processors. Typical research topics include maximizing the number of processors given a fixed number of links per processor, finding effective transmission schemes that minimize communication overhead (i.e., traffic congestion), and designing algorithms for important computation tasks which map efficiently onto different network topologies.
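A standard example of such a topology is the hypercube: the $n$-dimensional hypercube is the Cayley graph of $(\mathbb{Z}/2\mathbb{Z})^n$ with the standard basis as connection set, so processors are bit-vectors and links flip one bit. A sketch (function names mine) building $Q_3$ and measuring its diameter by BFS:

```python
from itertools import product
from collections import deque

def hypercube(n):
    # Cayley graph of (Z/2Z)^n with the standard basis as generators:
    # neighbors of a vertex differ from it in exactly one coordinate
    vertices = list(product((0, 1), repeat=n))
    gens = [tuple(1 if i == j else 0 for i in range(n)) for j in range(n)]
    edges = {v: [tuple(a ^ b for a, b in zip(v, g)) for g in gens]
             for v in vertices}
    return vertices, edges

def diameter(vertices, edges):
    # longest shortest path, via BFS from every vertex
    best = 0
    for s in vertices:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in edges[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        best = max(best, max(dist.values()))
    return best

v, e = hypercube(3)
print(len(v), diameter(v, e))  # 8 processors, diameter 3
```

The trade-off the answer describes is visible here: $2^n$ processors with only $n$ links each, at the cost of diameter $n$.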

Is graph theory a branch of abstract algebra? I doubt it. – Alex M. 1 hour ago
