There have been many debates over the years about the advantages and disadvantages of open source software with regard to computer security, and this post is not meant to rehash the pros and cons of those debates. In the interest of full disclosure, I am a proponent of the open source way of doing things and have always agreed with Eric S. Raymond’s conjecture that many eyes make all bugs shallow (even security bugs). Personal beliefs aside, however, I have always felt that an interesting dichotomy exists in the belief system of anyone who favors closed source software on security grounds.
If you ask any security person about using a proprietary encryption algorithm, they will instead recommend an established and vetted algorithm like AES. Why? In the field of cryptography, an algorithm is considered cryptographically secure only after public disclosure and extensive peer review by the cryptographic community. Algorithms such as AES, Twofish, and RSA are all public knowledge, and that openness has not lessened their security; it has provided evidence of it. No security professional would trust an algorithm that has not been through such a vetting process.
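To make the principle concrete, here is a minimal sketch (the key and message are illustrative, not from any real system) showing how this works in practice with HMAC-SHA256 from Python's standard library: the algorithm is completely public and extensively peer reviewed, and its security rests entirely on the secrecy of the key, never on the secrecy of the code.

```python
import hmac
import hashlib

# Kerckhoffs's principle in practice: HMAC-SHA256 is a fully public,
# vetted construction. Anyone can read the algorithm (or this code);
# without the key, they still cannot forge a valid tag.
key = b"example-secret-key"   # hypothetical key, for illustration only
message = b"attack at dawn"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Verification uses the same public algorithm; only the key is secret.
# compare_digest avoids timing side channels in the comparison itself.
valid = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
print(tag, valid)
```

The point of the sketch is that publishing every line of it costs the scheme nothing; the vetting process is precisely why we trust it.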
Yet many security professionals don’t hold software to the same standard they apply to a cryptographic algorithm, and instead consider security through obscurity a benefit in that case. Why are vetting and peer review deemed so important for one and not for the other? While I can understand keeping source code proprietary for business motives, it is much harder to understand a security-related case for source code secrecy (outside of certain niche cases, like the algorithms used to generate the numbers for secure tokens), given the lengthier experience of the cryptographic community.
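Even that token-generation niche has a well-vetted public alternative for most applications. As a hedged sketch (the URL is a made-up placeholder), Python's standard `secrets` module generates unguessable tokens with a completely open implementation; the security comes from the unpredictability of the underlying CSPRNG, not from hiding the code.

```python
import secrets

# A publicly documented, vetted API for unguessable tokens: the
# algorithm is open source, yet the tokens remain unpredictable
# because they draw from the OS's cryptographic randomness source.
session_token = secrets.token_hex(32)          # 32 random bytes -> 64 hex chars

# URL-safe variant, e.g. for a password-reset link (placeholder domain):
reset_link = f"https://example.com/reset?t={secrets.token_urlsafe(32)}"

print(len(session_token))
```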