• Query Execution

      It would be nice to leverage the ever-growing GPU resources available in modern machines in some fashion. This makes the most sense for heavy OLAP/analytical queries, but the biggest internal feature we'd need to add first is support for parallel query processing, so that a single user request can be divided into N parts and executed in parallel by multiple threads, subsystems, or even [remote] processes. Building on that, we could then determine which parts would naturally perform well on GPUs – things like aggregations, sorting, and grouping – and send that portion of the work to the GPUs using something like the CUDA API.
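
      To make the aggregation idea concrete, here is a minimal sketch (purely illustrative, not tied to any existing engine code) of the kind of work that could be offloaded: a CUDA kernel that sums a numeric column using a block-level tree reduction, with the host combining the per-block partial sums. The kernel name, buffer layout, and sizes are assumptions made up for the example.

#include <cstdio>
#include <cuda_runtime.h>

// Each block reduces its chunk of the input column into one partial sum.
__global__ void sum_column(const float *values, float *partials, int n) {
    extern __shared__ float cache[];
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    cache[tid] = (i < n) ? values[i] : 0.0f;
    __syncthreads();

    // Tree reduction within the block (blockDim.x must be a power of two).
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            cache[tid] += cache[tid + stride];
        __syncthreads();
    }
    if (tid == 0)
        partials[blockIdx.x] = cache[0];
}

int main() {
    const int n = 1 << 20;                    // e.g. one million rows of a numeric column
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;

    float *h_values = new float[n];
    for (int i = 0; i < n; ++i) h_values[i] = 1.0f;   // dummy column data

    float *d_values, *d_partials;
    cudaMalloc(&d_values, n * sizeof(float));
    cudaMalloc(&d_partials, blocks * sizeof(float));
    cudaMemcpy(d_values, h_values, n * sizeof(float), cudaMemcpyHostToDevice);

    sum_column<<<blocks, threads, threads * sizeof(float)>>>(d_values, d_partials, n);

    // Finish the reduction on the host; a real engine might instead launch a second kernel.
    float *h_partials = new float[blocks];
    cudaMemcpy(h_partials, d_partials, blocks * sizeof(float), cudaMemcpyDeviceToHost);
    float total = 0.0f;
    for (int b = 0; b < blocks; ++b) total += h_partials[b];
    printf("SUM(column) = %f\n", total);

    cudaFree(d_values);
    cudaFree(d_partials);
    delete[] h_values;
    delete[] h_partials;
    return 0;
}

      In a parallel query plan, each worker would run a kernel like this over its own partition of the column, and the final merge of partial aggregates would stay on the CPU side of the executor.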

            Assignee: Backlog - Query Execution
            Reporter: Andrew Shevchuk (dev.ashevchuk@gmail.com)
            Votes: 2
            Watchers: 12