The computational complexity of neural networks is considered from a theoretical point of view. The author shows that neural networks with symmetric weights possess the same computational power as general networks with asymmetric weights. The main theorem of the paper states that any neural circuit can be simulated by a symmetric neural network of the same size (that is, with the same number of neurons); the computation time of the symmetric network (the time sufficient to reach a stable state) depends on the depth of the asymmetric circuit being simulated. As a consequence, as far as their computational power is concerned, symmetric neural networks are comparable to the existing models of parallel computation. The presentation is clear, even though the proofs are only sketched.
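To make the notion of a stable state concrete, the following is a minimal illustrative sketch (not the author's construction) of a network of threshold units with symmetric weights, updated asynchronously until no unit changes. With symmetric weights and a zero diagonal, such asynchronous dynamics are guaranteed to converge, since each update can only decrease the energy function E(s) = -(1/2) s^T W s + theta^T s. All names and the tiny example network here are hypothetical.

```python
import numpy as np

def run_symmetric_network(W, theta, s, max_sweeps=100):
    """Asynchronously update threshold units until a stable state is reached.

    W     : symmetric weight matrix (W[i, j] == W[j, i], zero diagonal)
    theta : per-unit thresholds
    s     : initial state vector with entries in {-1, +1}
    """
    n = len(s)
    s = s.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(n):                       # one unit at a time (asynchronous)
            new = 1 if W[i] @ s >= theta[i] else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:                          # no unit flipped: stable state
            return s
    raise RuntimeError("did not stabilize within max_sweeps")

# Hypothetical example: two mutually excitatory units settle into agreement.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
theta = np.zeros(2)
print(run_symmetric_network(W, theta, np.array([1, -1])))  # -> [-1 -1]
```

The paper's simulation result can be read against this picture: the stable state plays the role of the circuit's output, and the number of sweeps needed to reach it corresponds to the computation time bounded in terms of the simulated circuit's depth.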