We recently started a project for a client that adds a substantial amount of complexity to our current optimization system. It will replace a workflow in which the client ran a second system before ours, producing a sub-optimal solution for that step. By folding that work into our own process, we should be able to produce a better solution, since our optimizer has much more information available when it runs.
There is just one problem with adding all of this complexity: the client doesn’t want the run time to increase. That constraint is what’s giving me the new model blues. I’m trying to find the best trade-off: I can find a better solution, but then the run time exceeds the window in which the input data is still valid; or I can produce a solution roughly in line with what the two separate systems achieved before, in about the same run time.
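For what it’s worth, the usual way I’ve seen this trade-off exposed is through the solver’s stopping criteria: cap the wall-clock time, or accept any solution proven to be within some gap of optimal, whichever comes first. Here’s a minimal sketch using Google OR-Tools CP-SAT on a toy knapsack problem; the solver choice, the 60-second budget, and the 2% gap are all illustrative assumptions, not details of our actual system.

```python
# A minimal sketch of trading solution quality for run time, assuming
# Google OR-Tools CP-SAT. The toy knapsack stands in for the real model.
from ortools.sat.python import cp_model

model = cp_model.CpModel()

# Toy problem: pick items to maximize value under a capacity limit.
values = [10, 13, 7, 11, 9]
weights = [4, 6, 3, 5, 4]
capacity = 12
x = [model.NewBoolVar(f"x{i}") for i in range(len(values))]
model.Add(sum(w * xi for w, xi in zip(weights, x)) <= capacity)
model.Maximize(sum(v * xi for v, xi in zip(values, x)))

solver = cp_model.CpSolver()
# The two trade-off knobs: stop at a wall-clock budget, or stop as soon
# as the incumbent is provably within 2% of optimal.
solver.parameters.max_time_in_seconds = 60.0  # illustrative budget
solver.parameters.relative_gap_limit = 0.02   # illustrative gap

status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    # FEASIBLE means a limit was hit first; we still get the best
    # solution found so far rather than no answer at all.
    print(solver.ObjectiveValue(), solver.StatusName(status))
```

The nice property of an anytime solver like this is that hitting the time limit still yields the best solution found so far, so quality degrades gracefully toward what the old separate-systems approach produced instead of failing outright.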
Have any of you run into a similar issue? Were you able to find a suitable trade-off?