You may have been running your FogBugz installation for a long time when, all of a sudden, you start to experience slowness accessing cases, or even API 500 errors and timeouts when trying to open some of them.
You may get an error similar to the following:
JSON API ERROR: Could not load /api/0/notifications/20
500 Internal Error
If you have ruled out all other known possibilities, rebuilding the FogBugz database indexes could be your solution.
One quick way to determine whether database performance has degraded is to run a SELECT query against the FogBugz database (via SQL Server Management Studio) on one of the large tables, such as Bug, BugEvent, or Attachment: if the system takes more than 30 seconds to return the top 500 rows, you know SQL Server performance has degraded. Example:
SELECT TOP 500 * FROM Attachment
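If you prefer an exact measurement over a stopwatch, SQL Server can report the elapsed time of the query itself. A minimal sketch (the Attachment table is used as an example; substitute whichever table you are testing):

```sql
-- Print parse/compile and execution times to the Messages tab
SET STATISTICS TIME ON;

SELECT TOP 500 * FROM Attachment;

SET STATISTICS TIME OFF;
-- A healthy database returns this in well under a second;
-- 30+ seconds of elapsed time points to index degradation.
```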
(Probable) Root Cause
Table index performance degradation is a lesser-known issue that occurs on rare occasions in FogBugz installations. Our records show it mainly affects installations that have been running for years and hold a high number of records (for example, more than 500K BugEvent records), although we have also seen this performance issue in low-capacity environments.
In general, when a SQL database runs for a long time, its table indexes can become fragmented on disk. The database engine then becomes slower as the database and indexes grow, until FogBugz queries take longer than 30 seconds to resolve, resulting in unwanted timeouts and API 500 errors in the interface.
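Rather than guessing, you can confirm fragmentation directly with SQL Server's index physical-stats DMV. This is a read-only check; the database name FogBugz is assumed here, so adjust it if yours differs:

```sql
-- List fragmentation per index in the current database, worst first.
-- Values above ~30% are typical candidates for a rebuild.
USE FogBugz;

SELECT
    OBJECT_NAME(ips.object_id)        AS TableName,
    i.name                            AS IndexName,
    ips.avg_fragmentation_in_percent  AS FragmentationPct,
    ips.page_count                    AS Pages
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.page_count > 100            -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;
```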
Of course, the above has some caveats: underpowered servers are more likely to display the symptoms mentioned above than well-provisioned ones. Multiple cores for the database, ample memory, and an SSD for database file storage can definitely decrease the chances of experiencing this situation. However, even the most powerful server will suffer slow performance if the indexes become heavily fragmented (for any reason).
If you are suddenly experiencing degraded SQL performance when opening cases or accessing filters, and have ruled out all other known causes of slow performance, you can run a manual index rebuild on each table/index of the FogBugz database.
How to rebuild the indexes (the right way)
Before jumping to the script, please follow these steps for a safer experience:
1) Take a FogBugz database backup before running the index rebuild task. It is highly recommended to make a backup (or take a snapshot if you are working in a virtual environment) before making any change to production data.
2) Generate a disk-usage report to identify the size of the data currently stored in your SQL Server:
Right-click the FogBugz database > Reports > Standard Reports > Disk Usage by Table. The report lists each table along with its row count, data size, and index size.
Reference: Identifying the Size of FogBugz Database
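With the backup taken and the baseline sizes recorded, the rebuild itself can be sketched as follows. This is an illustrative script, not an official FogBugz maintenance tool: it walks every table in the database (so you do not need to list them by hand) and rebuilds all indexes on each. Run it outside business hours, as rebuilds can block writers:

```sql
-- Rebuild every index on every table in the FogBugz database.
USE FogBugz;

DECLARE @table sysname;
DECLARE tables_cur CURSOR FOR
    SELECT QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

OPEN tables_cur;
FETCH NEXT FROM tables_cur INTO @table;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT 'Rebuilding indexes on ' + @table;
    EXEC ('ALTER INDEX ALL ON ' + @table + ' REBUILD');
    FETCH NEXT FROM tables_cur INTO @table;
END
CLOSE tables_cur;
DEALLOCATE tables_cur;

-- Refresh statistics afterwards so the query optimizer
-- benefits from the rebuilt indexes immediately.
EXEC sp_updatestats;
```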
If the index rebuild is successful, you should see a significant size change when comparing the index sizes before and after the rebuild, usually a reduction.
It is highly recommended to configure a routine maintenance task on the SQL Server to run a scheduled index defragmentation process on the FogBugz database:
While not a mandatory task, and definitely outside the scope of FogBugz support, the above is a great server administration practice that (if well defined) can deliver a major performance improvement to FogBugz with minimal downtime, keeping your database file working at top performance and preventing the unwanted symptoms of a low-performance database system.
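A common pattern for such a scheduled job is to reorganize lightly fragmented indexes and rebuild only the heavily fragmented ones, which keeps the maintenance window short. The sketch below uses the 5%/30% thresholds from Microsoft's general guidance; it is an illustration under those assumptions, not a hardened maintenance solution (purpose-built community scripts exist for that):

```sql
USE FogBugz;

DECLARE @sql nvarchar(max) = N'';

-- Build one REORGANIZE or REBUILD statement per fragmented index.
SELECT @sql = @sql
    + N'ALTER INDEX ' + QUOTENAME(i.name)
    + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
    + CASE WHEN ips.avg_fragmentation_in_percent > 30
           THEN N' REBUILD;'
           ELSE N' REORGANIZE;' END + CHAR(10)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
JOIN sys.tables  AS t ON t.object_id = i.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE ips.avg_fragmentation_in_percent > 5
  AND i.name IS NOT NULL;             -- skip heaps

EXEC sp_executesql @sql;
```

Scheduling this via SQL Server Agent (for example, weekly during off-peak hours) keeps fragmentation from ever reaching the point where cases time out.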