In this PR:
## Improve metadata cache recompute performance
We are aiming for ~100ms.
Deleting the relationMetadata table and the FKs pointing to it.
Fetching indexMetadata and indexFieldMetadata in a separate query, as TypeORM generates a suboptimal query otherwise.
## Remove caching lock
As recomputing the metadata cache is now lighter, we no longer prevent multiple concurrent computations. This also simplifies the interfaces.
## Introduce self-recovery mechanisms to recompute the cache automatically if corrupted
Aka `getFreshObjectMetadataMaps`.
## Custom object resolver performance improvement: 1s to 200ms
Double-check the queries and indexes used while creating a custom object.
Remove the database queries in favor of the cached objectMetadataMap.
## Reduce objectMetadataMaps to 500kB
<img width="222" alt="image"
src="https://github.com/user-attachments/assets/2370dc80-49b6-4b63-8d5e-30c5ebdaa062"
/>
We used to store 3 fieldMetadataMaps (byId, byName, byJoinColumnName). While this is great for DevX, it is not great for performance. Using the same mechanism as for objectMetadataMap, we only keep the byId map and introduce two other maps, idByName and idByJoinColumnName, to make the bridge.
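For illustration, a minimal sketch of the resulting shape (type names here are assumptions, not the exact interfaces from the codebase):

```typescript
// Illustrative only: the real field metadata type has more properties.
type FieldMetadata = { id: string; name: string; joinColumnName?: string };

type FieldMetadataMaps = {
  byId: Record<string, FieldMetadata>; // single source of truth
  idByName: Record<string, string>; // bridges name -> id
  idByJoinColumnName: Record<string, string>; // bridges joinColumnName -> id
};

// Resolving a field by name goes through the bridge map:
const getFieldByName = (maps: FieldMetadataMaps, name: string) =>
  maps.byId[maps.idByName[name]];
```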
## Add dataloader on IndexMetadata (aka indexMetadataList in the API)
## Improve field resolver performance too
## Deprecate ClientConfig
We need to use twentyORMManager and not twentyORMGlobalManager in the REST API base handler, because we don't want to bypass permissions via the `shouldBypassPermissions` parameter (which we would have to do to use twentyORMGlobalManager).
ScopedWorkspaceContextFactory was not adapted to REST API requests, whose shape differs from GraphQL requests.
Fix the following error:
`Cannot use a pool after calling end on a pool`
<img width="917" alt="Screenshot 2025-05-17 at 14 56 18"
src="https://github.com/user-attachments/assets/63081831-9a7e-4633-8274-de9f8a48dbae"
/>
The problem was that the datasource manager was destroying the
connections when a datasource cache expired.
This PR implements a global PostgreSQL connection pool sharing
mechanism.
- Patches pg.Pool to reuse connection pools across the application when
connection parameters match, reducing resource overhead.
- New environment variables allow enabling/disabling sharing and
configuring pool size, idle timeout, and client exit behavior.
WorkspaceDatasourceFactory will now use shared pools if enabled, which avoids recreating 10 connections per pod per workspace.
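A hedged sketch of the sharing idea (the key derivation and the map are assumptions; `max`, `idleTimeoutMillis`, and `allowExitOnIdle` are real `pg` pool options):

```typescript
import { Pool, PoolConfig } from 'pg';

// Hypothetical sketch: reuse a Pool whenever connection parameters match.
const sharedPools = new Map<string, Pool>();

const poolKey = (config: PoolConfig): string =>
  `${config.host}:${config.port}/${config.database}/${config.user}`;

export const getSharedPool = (config: PoolConfig): Pool => {
  const key = poolKey(config);
  let pool = sharedPools.get(key);

  if (!pool) {
    pool = new Pool({
      ...config,
      max: 4, // pool size (configurable via env in the actual PR)
      idleTimeoutMillis: 30_000, // idle timeout
      allowExitOnIdle: true, // client exit behavior
    });
    sharedPools.set(key, pool);
  }

  return pool;
};
```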
---------
Co-authored-by: Charles Bochet <charlesBochet@users.noreply.github.com>
Follow-up on https://github.com/twentyhq/twenty/pull/12007
In this PR:
- adding a filter on HttpExceptionHandlerService to filter out 4xx errors from driver handling (as we do for GraphQL errors: see the useGraphQLErrorHandler hook; only filteredIssues are sent to `exceptionHandlerService.captureExceptions()`)
- grouping together more missing metadata issues
- attempting to use error codes as issue names in Sentry to improve the UI; for now it says "Error" all the time
As discussed with @Weiko
Even though we cache the datasource, connections expire after 10 minutes in TypeORM, which might be why our app is spamming the proxy asking for connections. This PR also lowers the pool size.
This PR attempts to improve Sentry grouping and filtering by
- Using the exceptionCode as the fingerprint when the error is a customException. For this to work, in this PR we are now throwing customExceptions instead of internalServerErrors deprived of their code. They will still be converted to internal server errors when sent back in the response.
- Filtering 4xx issues where it was missing (for emailVerification because errors were not handled; for invalid captcha and billing errors because they are httpErrors and not graphqlErrors).
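A minimal sketch of the fingerprinting idea, assuming a customException base class with a `code` property (not the actual service code):

```typescript
import * as Sentry from '@sentry/node';

// Hypothetical base class standing in for the codebase's custom exceptions.
class CustomException extends Error {
  constructor(
    message: string,
    public readonly code: string,
  ) {
    super(message);
  }
}

const captureException = (error: Error): void => {
  Sentry.withScope((scope) => {
    if (error instanceof CustomException) {
      // Group occurrences by exception code instead of by message.
      scope.setFingerprint([error.code]);
    }
    Sentry.captureException(error);
  });
};
```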
---------
Co-authored-by: Félix Malfait <felix@twenty.com>
This is a first PR to remove the old relation logic.
Next steps:
- remove relationMetadata from cache
- remove relationMetadata table content and structure
- refactor relationDefinition to leverage field.settings instead
In this PR we are
- introducing a cached map `{ userWorkspaceId: roleId }` to reduce calls to get a userWorkspace's role (we had N+1 queries around that with combinedFindMany queries, and generally a lot of avoidable queries)
- using the roles permissions cache (`{ roleId: { objectNameSingular: { canRead: bool, canUpdate: bool, ... } } }`) in Permissions V1's userHasObjectPermission, in order to 1) improve performance by avoiding calls to get roles and 2) start using our permissions cache
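For illustration, a sketch of the two cached maps and the lookup combining them (the shapes are assumptions, not the exact types from the codebase):

```typescript
// Hypothetical shapes for the two caches described above.
type RoleIdByUserWorkspaceId = Record<string, string>;

type ObjectPermissions = {
  canRead: boolean;
  canUpdate: boolean;
  canSoftDelete: boolean;
  canDestroy: boolean;
};

type PermissionsByRoleId = Record<string, Record<string, ObjectPermissions>>;

// Hypothetical lookup: resolve the role from the first cache, then the
// object-level permission from the second, without any db call.
const userHasObjectPermission = (
  roleIdByUserWorkspaceId: RoleIdByUserWorkspaceId,
  permissionsByRoleId: PermissionsByRoleId,
  userWorkspaceId: string,
  objectNameSingular: string,
  permission: keyof ObjectPermissions,
): boolean => {
  const roleId = roleIdByUserWorkspaceId[userWorkspaceId];

  if (!roleId) {
    return false;
  }

  return permissionsByRoleId[roleId]?.[objectNameSingular]?.[permission] ?? false;
};
```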
In this PR we are
- (if permissionsV2 is enabled) executing permission checks at the query builder level. To do so we want to override the query builder methods that perform db calls (.execute(), .getMany(), etc.). For now I have only overridden some of the query builder methods for the POC. To do so I created custom query builder classes that extend TypeORM's query builders (selectQueryBuilder and updateQueryBuilder for now; later I will tackle softDeleteQueryBuilder, etc.); see the sketch after this list.
- adding a notion of a roles permissions version and a roles permissions object to datasources. We will now use one datasource per roleId and rolesPermissionsVersion. Both the rolesPermissionsVersion and the rolesPermissions object are stored in Redis and recomputed on role update, or if queried and found empty. Unlike for the metadata version, we don't need to store a version in the db that stands for the source of truth. We also don't need to destroy and recreate the datasource if the rolesPermissionsVersion changes; we only need to update the values for rolesPermissions and rolesPermissionsVersion on the existing datasource.
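As a sketch of the query builder override (the permission-checking helper is an assumption; only the extension mechanism is TypeORM's):

```typescript
import { ObjectLiteral, SelectQueryBuilder } from 'typeorm';

// Hypothetical sketch: intercept a db-calling method and validate
// permissions before delegating to TypeORM's implementation.
class PermissionAwareSelectQueryBuilder<
  Entity extends ObjectLiteral,
> extends SelectQueryBuilder<Entity> {
  override async getMany(): Promise<Entity[]> {
    this.validateReadPermission(); // assumed helper, not a TypeORM API
    return super.getMany();
  }

  private validateReadPermission(): void {
    // Check the cached { roleId: { objectNameSingular: { canRead, ... } } }
    // object attached to the datasource and throw if canRead is false.
  }
}
```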
What this PR misses:
- the computation of roles permissions should take the objectPermissions table into account (for now it only looks at what's on the roles table)
- pursuing the extension of the query builder classes and the overriding of their db-calling methods
- what should the behaviour be for calls from twentyOrmGlobalManager that don't have a roleId?
## Context
This fix ensures that even if a datasource creation promise throws and
is cached, subsequent requests won't return that cached exception.
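A minimal sketch of the fix, under assumed names (the real factory code differs):

```typescript
import { DataSource } from 'typeorm';

// Hypothetical sketch: evict the memoized creation promise on rejection so
// later requests retry instead of receiving the cached exception.
const dataSourcePromises = new Map<string, Promise<DataSource>>();

export const getDataSource = (
  key: string,
  create: () => Promise<DataSource>,
): Promise<DataSource> => {
  let promise = dataSourcePromises.get(key);

  if (!promise) {
    promise = create().catch((error) => {
      dataSourcePromises.delete(key); // don't serve the failure from cache
      throw error;
    });
    dataSourcePromises.set(key, promise);
  }

  return promise;
};
```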
Also adding a TTL on MetadataObjectMetadataOngoingCachingLock; this is not something that should stay in the cache forever, and it could potentially unblock some race conditions (the origin of the issue is probably a performance problem where the lock is not removed, as it should be, after metadata computation and caching).
# Introduction
In this PR we've migrated `twenty-shared` from a `vite` app in [library mode](https://vite.dev/guide/build#library-mode) to a [preconstruct](https://preconstruct.tools/) "atomic" application. (In the future we would like to introduce preconstruct to handle all of our atomic dependencies such as `twenty-emails`, `twenty-ui`, etc. It will be integrated at the monorepo's root directly, but that would be too invasive at first, so we are starting incrementally via `twenty-shared`.)
For more information regarding the motivations, please refer to:
- https://github.com/twentyhq/core-team-issues/issues/587
-
https://github.com/twentyhq/core-team-issues/issues/281#issuecomment-2630949682
close https://github.com/twentyhq/core-team-issues/issues/589
close https://github.com/twentyhq/core-team-issues/issues/590
## How to test
In order to ease the review, this PR will ship all the codegen at the very end; the actual meaningful diff is `+2,411 −114`.
In order to migrate existing dependent packages to the new `twenty-shared` multi-barrel architecture, you need to run locally:
```sh
yarn tsx packages/twenty-shared/scripts/migrateFromSingleToMultiBarrelImport.ts && \
npx nx run-many -t lint --fix -p twenty-front twenty-ui twenty-server twenty-emails twenty-shared twenty-zapier
```
Note that `migrateFromSingleToMultiBarrelImport` is idempotent. It's currently included in the PR but should not be merged (the script will be removed and the codegen added before merging).
## Misc
- related open preconstruct issue: https://github.com/preconstruct/preconstruct/issues/617
## Closed related PRs
- https://github.com/twentyhq/twenty/pull/11028
- https://github.com/twentyhq/twenty/pull/10993
- https://github.com/twentyhq/twenty/pull/10960
## Upcoming enhancements (in other dedicated PRs)
- 1/ refactor the barrel generation to export atomic modules instead of `*`
- 2/ move the barrel generation to its own package with several files and tests
- 3/ migrate twenty-ui the same way
- 4/ use `preconstruct` at the monorepo's global level
## Conclusion
As always, any suggestions are welcome!
Simplifying the upgrade system a lot.
New way to upgrade:
`yarn command:prod upgrade`
New way to write upgrade commands (all wrapping is done for you)
```typescript
override async runOnWorkspace({
index,
total,
workspaceId,
options,
}: RunOnWorkspaceArgs): Promise<void> {}
```
Also cleaning CommandModule imports to make it lighter
Proposal:
- Add a method in ActiveWorkspaceCommand to loop over workspaces safely (add a counter, add try/catch, provide a datasource with a fresh cache, destroy the datasource => as we always do); see the sketch below.
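A minimal sketch of what that shared loop could look like, under assumed names (not the actual ActiveWorkspaceCommand API):

```typescript
// Hypothetical sketch of the safe per-workspace loop described above.
async function runOnActiveWorkspaces(
  workspaceIds: string[],
  runOnWorkspace: (args: {
    index: number;
    total: number;
    workspaceId: string;
  }) => Promise<void>,
): Promise<void> {
  const total = workspaceIds.length;

  for (const [index, workspaceId] of workspaceIds.entries()) {
    try {
      // Provide a datasource with a fresh cache here, then run the command body.
      await runOnWorkspace({ index, total, workspaceId });
    } catch (error) {
      // Keep looping: one failing workspace should not abort the others.
      console.error(`Failed for workspace ${workspaceId} (${index + 1}/${total})`, error);
    } finally {
      // Destroy the workspace datasource here, as we always do.
    }
  }
}
```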
Also in this PR:
- make sure we clear all dataSources (and not only the one for the current metadata version in RAM)
We had to remove soft-deletion on default filters due to missing indexes. We now generate composite indexes with the foreign key that also contain the deletedAt column, which should improve performance.
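For illustration, the index shape this describes could be expressed like this with a TypeORM decorator (the entity and column names are made up for the example):

```typescript
import {
  Column,
  DeleteDateColumn,
  Entity,
  Index,
  PrimaryGeneratedColumn,
} from 'typeorm';

// Hypothetical entity: the composite index covers the foreign key AND
// deletedAt, so filters like "companyId = ? AND deletedAt IS NULL" stay indexed.
@Index('IDX_TASK_COMPANY_ID_DELETED_AT', ['companyId', 'deletedAt'])
@Entity()
class Task {
  @PrimaryGeneratedColumn('uuid')
  id: string;

  @Column({ type: 'uuid' })
  companyId: string;

  @DeleteDateColumn()
  deletedAt: Date | null;
}
```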
Looks like insert() does not return foreign keys. We could eventually call findMany afterwards, but it seems that's what save() is doing, so I'm replacing insert with save.
```typescript
/**
* Flag to determine whether the entity that is being persisted
* should be reloaded during the persistence operation.
*
* It will work only on databases which does not support RETURNING / OUTPUT statement.
* Enabled by default.
*/
reload?: boolean;
```
Note: save() also does an upsert by default with no way to configure that, so if we want to keep that behaviour we will need to add a check before:
```typescript
// `Any` is imported from typeorm: import { Any } from 'typeorm';
if (args.upsert) {
  const existingRecords = await repository.findBy({
    id: Any(args.data.map((record) => record.id)),
  });
  // ...
}
```
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
- avoid failing when the cache is missing (used for commands)
- remove the unused load-cache function. The cache will always be re-created on fetch if it does not exist.
## Context
We currently have a race condition when dealing with datasource creation. This happens when multiple queries arrive at the same time (for example graphql dataloaders) and the datasource is not created yet. Since the datasource is stored in memory this can happen quite often, and the concurrent queries were all triggering the datasource creation at the same time.
I'm trying to fix the issue with promise memoization. Now, instead of
caching the datasource only, we also want to cache the promise of the
datasource creation and make the creation itself synchronous.
More info about promise memoization in this article for example:
https://www.jonmellman.com/posts/promise-memoization
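A minimal sketch of the pattern under assumed names (the article above covers it in depth):

```typescript
import { DataSource } from 'typeorm';

// Hypothetical sketch: cache the creation promise itself. The get/set must
// happen synchronously (no await in between) so concurrent callers all
// observe and await the same in-flight promise.
const dataSourcePromiseByWorkspaceId = new Map<string, Promise<DataSource>>();

export const getOrCreateDataSource = (
  workspaceId: string,
  create: () => Promise<DataSource>,
): Promise<DataSource> => {
  const cached = dataSourcePromiseByWorkspaceId.get(workspaceId);

  if (cached) {
    return cached;
  }

  const promise = create();
  dataSourcePromiseByWorkspaceId.set(workspaceId, promise);

  return promise;
};
```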
Co-authored-by: Charles Bochet <charlesBochet@users.noreply.github.com>
In this PR:
- removing upgrade-0.24 commands as we are releasing 0.30
- introducing cache:flush command
- refactoring upgrade command and sync-metadata command to use the
ActiveWorkspacesCommand so they consistently run on all workspaces or
selected workspaces
Fixes:
- clear localStorage on sign out
- fix missing workspaceMember in verify resolver
- do not throw on a "datasource already destroyed" exception, which can happen with a race condition when several resolvers are resolving in parallel
In this PR:
1. Refactor guards to avoid duplicated queries: WorkspaceAuthGuard and UserAuthGuard only check for the existence of the workspace and the user on the request, without querying the database
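For illustration, a sketch of what such a guard can look like (the request shape is an assumption; only the NestJS guard interface is real):

```typescript
import { CanActivate, ExecutionContext, Injectable } from '@nestjs/common';

// Hypothetical sketch: the guard only checks presence on the request
// (populated earlier in the auth pipeline) and performs no database query.
@Injectable()
export class WorkspaceAuthGuard implements CanActivate {
  canActivate(context: ExecutionContext): boolean {
    const request = context.switchToHttp().getRequest();

    return Boolean(request.workspace);
  }
}
```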
This PR introduces the following changes:
- add the metadataVersion to all our metadata cache keys to ease
troubleshooting:
<img width="1146" alt="image"
src="https://github.com/user-attachments/assets/8427805b-e07f-465e-9e69-1403652c8b12">
- introduce a cache recompute lock to avoid overloading the database by recomputing the cache many times
## Context
The goal is to replace pg_graphql with our own ORM wrapper (TwentyORM).
This PR adds parsing logic to convert GraphQL requests into calls to the ORM, replacing the pg_graphql implementation.
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
At field creation we check the availability of the name by comparing it to the other fields' names on the object; but for composite fields, the fields' names as indicated in the repository do not exactly match the column names on the tables (e.g. the "createdBy" field is actually represented by the columns createdByName, createdBySource, etc.). In this PR we prevent conflicts with the standard composite fields' names.
There is still room for errors with custom composite fields: for example, a custom composite field "address" of type address on a custom object "listing" will introduce the columns addressAddressStreet1, addressAddressStreet2, etc., while we won't prevent the user from later creating a custom field named "addressAddressStreet1". For now I decided not to tackle this, as it seems like an extreme edge case and would impact the performance of all field creations while almost never being useful (I think).
We have found the root cause of the issue:
- when using a datasource (including the cached ones), we are fetching the ObjectMetadataCollection from cache (700kB). Datasource usage happens any time we use twentyORM, which is everywhere in the jobs and in some resolvers (including the GetCurrentUser one). This puts a high load on Redis and causes the performance issues we are seeing.
- we actually don't need to fetch this objectMetadataCollection when using a cached datasource, only when we instantiate a new one
## Context
As we grow, the messaging scripts are experiencing performance issues, forcing us to temporarily disable them on the cloud.
While investigating the performance, I noticed that generating the entity schema (for twentyORM) in the repository takes ~500ms locally on my Mac M2, so likely more on pods. So we now cache the entitySchema!
I'm also clarifying the naming around schemaVersion and cacheVersion: both are renamed workspaceMetadataVersion and migrated to the workspace table (the workspaceCacheVersion table is dropped).
Implement soft delete on standard and custom objects.
This is a temporary solution; when we drop `pg_graphql` we should rely on the `softDelete` functions of TypeORM.
---------
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
Co-authored-by: Lucas Bordeau <bordeau.lucas@gmail.com>
Calling `getObjectMetadata` from `WorkspaceCacheStorageService` in every query was causing big performance issues. The `objectMetadataCollection` is now part of the `WorkspaceInternalContext`, so we only instantiate it once in the `WorkspaceDatasourceFactory`.
Queries are now much faster; for instance, TimelineCalendar went from ~450ms to 80ms.
In this PR:
- adding Favorites to Tasks and Notes
- fixing inconsistencies between custom object creation and the sync of standard fields on custom objects
- fixing workspaceCacheVersion not being used to invalidate existing datasources