The impending introduction of self-driving cars brings a new level of complexity, not only in its technical requirements but also in the ethical challenges it raises. The question of which ethical principles should guide the programming of crash algorithms, especially in response to so-called dilemma situations, is among the most controversially discussed moral issues. This paper critically investigates whether, and how, rule utilitarianism might be adequate to guide the ethical behaviour of autonomous cars in driving dilemmas. Three core aspects of the rule utilitarian concept are discussed with regard to their relevance for the given context: the universalization principle, the ambivalence of compliance issues, and the demandingness objection. It is concluded that a rule utilitarian approach might be useful for solving driverless car dilemmas only to a limited extent. In particular, it cannot provide the exclusive ethical criterion when evaluated from a practical point of view. However, it might still be of conceptual value in the context of a pluralist solution.