Speaking as a user rather than a developer, triggers are a point of concern. Obviously, not using them should incur no overhead, but they are far from trivial to implement.
- Pre- and post-operation hook points. (With pre-hooks potentially allowing the operation to be vetoed, e.g. a pre-update hook indicating the update should not proceed.)
- For all hooks, the full document must be passed. It's not enough to say "the document with ID X is about to be deleted"; the trigger logic needs to be able to inspect the document itself.
- Many triggers may want to perform larger-scale operations, such as delivering e-mail in response to a record change. Clearly, mongod isn't the right place to be doing that, so you might inject a "task" record of some kind which an external process watches for… at which point you might as well be doing the "trigger" monitoring there anyway.
- The last point also opens the door to database-level amplification attacks, where one write fans out into many trigger-driven writes that fire further triggers. (Requiring careful coding to bound.)
- Using the new document validation mechanism to filter which operations fire a given trigger would require iterating over the candidate triggers and repeatedly evaluating their match expressions. That's not efficient, and the overhead is added to every invocation of every hooked operation.
- Like validation, hooks would require some bypass mechanism, plus assorted tool changes to control trigger execution during import/restore.
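The "task record" alternative mentioned above can be sketched in a few lines. This is a minimal illustration using in-memory stand-ins for the collections; the collection names, task fields, and worker function are all hypothetical, and in practice the worker would be a separate process watching a real MongoDB collection:

```python
import time
from collections import deque

# In-memory stand-ins for two collections. In a real deployment these
# would be MongoDB collections, and the worker a separate process.
orders = []
tasks = deque()

def update_order(order):
    """Application-level write: persist the change AND enqueue a task
    record describing the side effect, instead of a server-side trigger."""
    orders.append(order)
    tasks.append({
        "type": "send_email",          # hypothetical task type
        "order_id": order["_id"],
        "created_at": time.time(),
    })

def run_worker_once():
    """External watcher: drain pending task records and perform the
    heavyweight side effects (e-mail delivery, etc.) outside mongod."""
    handled = []
    while tasks:
        task = tasks.popleft()
        # ... deliver e-mail, call a webhook, etc. ...
        handled.append(task["type"])
    return handled

update_order({"_id": 1, "status": "shipped"})
print(run_worker_once())  # prints ['send_email']
```

Once the watcher process exists, it can just as easily observe the writes themselves, which is the point: the "trigger" logic ends up outside mongod either way.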
Pre-aggregation via upsert operations is effectively the "view" process in MongoDB. Pairing each standard insert with its pre-aggregate update typically works well, without needing to asynchronously divorce the pre-aggregate update from the insert, no? This sounds very much like a problem to be solved at the client driver (application) level, not server-side.
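That client-side pairing can be sketched as follows. This simulates the upsert semantics with an in-memory dict (the page-view schema is a made-up example); with a real driver it would be an insert on the events collection followed by an upsert with a `$inc` on the pre-aggregate collection:

```python
from collections import defaultdict

# In-memory stand-ins. With a real driver this pairing would be an
# insert_one() on "events" followed by an
# update_one({...}, {"$inc": {...}}, upsert=True) on "daily_counts".
events = []
daily_counts = defaultdict(int)   # pre-aggregated "view", keyed by (day, url)

def record_pageview(day, url):
    """Client-side pairing: the raw insert and its pre-aggregate update
    are issued together by the application, no server-side trigger needed."""
    events.append({"day": day, "url": url})   # raw insert
    daily_counts[(day, url)] += 1             # upsert-style $inc

record_pageview("2014-05-01", "/home")
record_pageview("2014-05-01", "/home")
record_pageview("2014-05-01", "/about")
print(daily_counts[("2014-05-01", "/home")])  # prints 2
```

The two writes are not transactional, but for counters and similar pre-aggregates that is usually an acceptable trade-off, and it keeps the logic where the schema knowledge already lives: in the application.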