If your FogBugz installation has been running for an extended period, you might eventually start to experience slowness when accessing cases, along with JSON API errors or 500 Internal Error timeouts. The error may look similar to this:
JSON API ERROR: Could not load /api/0/notifications/20
500 Internal Error
If you have ruled out all other known causes, this article outlines the steps to rebuild the FogBugz database indexes when your FogBugz system is running slowly.
One quick way to determine whether the database is experiencing a decline in performance is to run a SELECT query (via SQL Server Management Studio) against one of the larger FogBugz tables, such as the latest bug cases, BugEvent, or the Attachment table:
SELECT TOP 500 * FROM Attachment
Note: If the query takes more than 30 seconds to return the first 500 rows, SQL database performance has likely degraded.
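To time the test query precisely rather than by eye, you can wrap it with SET STATISTICS TIME. This is a T-SQL sketch; the Attachment table name comes from the query above, and the timing output appears on the Messages tab in SQL Server Management Studio:

```sql
-- Measure CPU and elapsed time for the test query.
SET STATISTICS TIME ON;

SELECT TOP 500 * FROM Attachment;

SET STATISTICS TIME OFF;
-- The Messages tab shows "SQL Server Execution Times";
-- elapsed times well above 30,000 ms point to degraded performance.
```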
Table index performance degradation is a lesser-known issue that only rarely affects FogBugz installations. It mainly occurs on installations that have been running for several years and hold a large number of records (for example, more than 500,000 BugEvent records), although it can also appear in low-capacity environments.
Over time, the table indexes of a long-running SQL database become fragmented on disk. As the database and its indexes grow, the database engine slows down until FogBugz queries take longer than 30 seconds to resolve, which leads to unwanted timeouts and JSON API errors in the interface.
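You can confirm fragmentation directly by querying SQL Server's index statistics. The following is a sketch using the standard sys.dm_db_index_physical_stats dynamic management view (the 30% and 1,000-page thresholds are common guidance, not FogBugz-specific values); run it while connected to the FogBugz database:

```sql
-- List the most fragmented indexes in the current database.
SELECT
    OBJECT_NAME(ips.object_id)           AS TableName,
    i.name                               AS IndexName,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 30  -- commonly cited rebuild threshold
  AND ips.page_count > 1000                  -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

High percentages on large tables such as BugEvent are consistent with the symptoms described above.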
Of course, there are exceptions. Underpowered servers are more likely to display the symptoms above than high-powered ones. Multiple cores, ample memory, and an SSD for the database file storage all decrease the chances of running into this situation. However, even the most powerful server is bound to slow down if the indexes become corrupted (for any reason).
If you are experiencing a decline in performance in your SQL database, especially when opening cases or accessing filters, you can run a manual index rebuild on each table/index in the FogBugz database. Please proceed only after you have ruled out all other known causes of slow performance.
Rebuilding the Indexes
Before jumping into the script, please follow these steps for a safer execution:
Take a FogBugz database backup before running the index rebuild task. Making a backup (or taking a snapshot, if you are working in a virtual environment) before changing any production data is highly recommended.
Run a database report to identify the size of the data currently stored on your SQL server. Right-click the FogBugz database and select Reports > Standard Reports > Disk Usage by Table:
Reference: Identifying the Size of FogBugz Database
Run the attached SQL script below and wait for it to finish. Make sure to change Trial1 to the name of your FogBugz database.
Re-run the report.
Note: If the index rebuild is successful, you should see a significant change in size when comparing the indexes before and after the rebuilding process (the index size is usually reduced).
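The attached script remains the authoritative version, but the general approach of such a rebuild can be sketched as a dynamically generated ALTER INDEX ... REBUILD over every user table (a T-SQL sketch; Trial1 is the placeholder database name from the steps above):

```sql
USE Trial1;  -- change Trial1 to the name of your FogBugz database
GO

-- Build one ALTER INDEX ... REBUILD statement per user table,
-- then execute the whole batch.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'ALTER INDEX ALL ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
    + N' REBUILD;' + CHAR(13)
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id;

EXEC sp_executesql @sql;
```

Note that REBUILD takes locks on each table, so run it during a maintenance window when FogBugz traffic is low.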
It is highly recommended to configure a routine maintenance task on your SQL server that runs a scheduled index defragmentation process on the FogBugz database.
While this is not mandatory (and falls outside FogBugz's scope of support), it is excellent server administration practice that can yield significant performance improvements for FogBugz with minimal downtime. It keeps your database file working at top performance and prevents the unwanted symptoms of a low-performing database system.
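A lighter-weight option for such a recurring task is REORGANIZE, which runs online so FogBugz can stay available while it executes. A sketch for the two tables named in this article (extend it to other large tables as needed):

```sql
-- Weekly maintenance sketch: REORGANIZE is an online operation,
-- so FogBugz does not need to be taken down while it runs.
ALTER INDEX ALL ON dbo.BugEvent REORGANIZE;
ALTER INDEX ALL ON dbo.Attachment REORGANIZE;

-- Refresh optimizer statistics afterwards so new query plans
-- benefit from the defragmented indexes.
UPDATE STATISTICS dbo.BugEvent;
UPDATE STATISTICS dbo.Attachment;
```

Scheduling these statements as a SQL Server Agent job or a maintenance plan step keeps fragmentation from building back up between manual rebuilds.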