In this PR, I'm:
- adding a `createdBy` field (type `ACTOR`) on custom objects when they are created
- moving the `name` and `position` default columns to the set of columns automatically created on object creation
- fixing a bug on mutations (update / create): if the targeted object has a `data` custom field, it was conflicting with the payload ==> I feel we need to refactor this part of the code, but we can keep this for a bit later as we plan to move out of pg_graphql
<img width="1198" alt="image"
src="https://github.com/user-attachments/assets/891c4a97-bab1-415c-8551-dabd5996a794">
This pull request introduces a new `FieldMetadataType` called `ACTOR`.
The primary objective of this new type is to add an extra column to the
following objects: `person`, `company`, `opportunity`, `note`, `task`,
and all custom objects.
This composite type contains three properties:
- `source`
```typescript
export enum FieldActorSource {
  EMAIL = 'EMAIL',
  CALENDAR = 'CALENDAR',
  API = 'API',
  IMPORT = 'IMPORT',
  MANUAL = 'MANUAL',
}
```
- `workspaceMemberId`
- This property can be `undefined` in some cases and refers to the
member who created the record.
- `name`
- Serves as a fallback if the `workspaceMember` is deleted and is used
for other source types like `API`.
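For reference, here is a minimal sketch of what the composite value could look like; `ActorMetadata` and its field types are assumptions inferred from the three properties above, not the exact server-side definition:
```typescript
// Hypothetical shape of the ACTOR composite value (illustrative only).
export type ActorMetadata = {
  source: FieldActorSource;
  workspaceMemberId?: string; // may be undefined, e.g. for API-created records
  name: string; // fallback display name, kept even if the workspaceMember is deleted
};
```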
### Functionality
The pre-hook system has been updated to allow real-time argument
updates. When a record is created, a pre-hook can now compute and update
the arguments accordingly. This enhancement enables the `createdBy`
field to be populated with the correct values based on the
`authContext`.
The `authContext` now includes:
- An optional User entity
- An optional ApiKey entity
- The workspace entity
This provides access to the necessary data for the `createdBy` field.
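As a rough sketch, the extended context could look like this (the entity types below are placeholders, not the actual entity classes):
```typescript
// Illustrative shape of the extended authContext described above.
type User = { id: string; firstName: string; lastName: string };
type ApiKey = { id: string; name: string };
type Workspace = { id: string };

type AuthContext = {
  user?: User; // optional User entity
  apiKey?: ApiKey; // optional ApiKey entity
  workspace: Workspace; // the workspace entity is always present
};
```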
In the GraphQL API, only the `source` can be specified in the
`createdBy` input. This allows the front-end to specify the source when
creating records from a CSV file.
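For example, a hypothetical `createOne` input for a record imported from a CSV might look like this (field names illustrative):
```typescript
// Only `source` can be set in the createdBy input; the server fills in
// workspaceMemberId and name from the authContext.
const input = {
  name: 'Acme',
  createdBy: { source: FieldActorSource.IMPORT },
};
```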
### Front-End Handling
On the front-end, `orderBy` and `filter` are only applied to the `name` property of the `ACTOR` composite type. Currently, we are unable to apply these operations to the workspace member relation. This means that if a workspace member changes their first or last name, there may be a mismatch, because the stored name will differ from the new one. The name displayed on screen is based on the workspace member entity when available.
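In practice, this means query variables only target the composite's `name` sub-field. A hedged sketch (the exact filter and ordering operators may differ):
```typescript
// Illustrative query variables: filtering and ordering hit the `name`
// property of the ACTOR composite, not the workspace member relation.
const variables = {
  filter: { createdBy: { name: { ilike: '%Tim%' } } },
  orderBy: { createdBy: { name: 'AscNullsLast' } },
};
```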
### Missing Components
Currently, this PR does not include a `createdBy` value for the `EMAIL`
and `CALENDAR` sources. These records are created in a job, and at
present, we only have access to the workspaceId within the job. To
address this, we should use a function similar to
`loadServiceWithContext`, which was recently removed from `TwentyORM`.
This function would allow us to pass the `authContext` to the jobs
without disrupting existing jobs.
Another PR will be created to handle these cases.
### Related Issues
Fixes issue #5155.
### Additional Notes
This PR doesn't include migrations for existing records and views. Everything works properly when the database is reset, but that part is still missing for now; we'll add it in another PR.
- There is a minor issue: front-end tests have been broken since commit 80c0fc7ff1.
---------
Co-authored-by: Lucas Bordeau <bordeau.lucas@gmail.com>
Co-authored-by: Charles Bochet <charles@twenty.com>
## Context
We recently introduced the new twenty ORM and used it in the update
methods in the query runner.
Initially, we were using pg_graphql to fetch the record before updating it, allowing us to compare the before and after states and create a diff. This diff is then used for timeline activity creation. Now, TwentyORM does the fetch while pg_graphql still does the update, and their responses are not exactly the same, which means the diff is not working as intended (e.g. date types were always in the diff because one response returns a Date and the other a string).
This PR introduces an `updatedFields` property on the update event, which comes from the input. This is not ideal, as it won't work for API users who send the whole payload, but it will be sufficient for our front-end, which only sends modified fields. We then compare only those fields in the diff.
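Conceptually, the comparison now looks something like this (a simplified sketch; the actual event and diff shapes differ):
```typescript
// Only the fields listed in `updatedFields` are compared, so type mismatches
// on untouched fields can no longer pollute the diff.
function computeDiff(
  before: Record<string, unknown>,
  after: Record<string, unknown>,
  updatedFields: string[],
): Record<string, { before: unknown; after: unknown }> {
  const diff: Record<string, { before: unknown; after: unknown }> = {};
  for (const field of updatedFields) {
    if (before[field] !== after[field]) {
      diff[field] = { before: before[field], after: after[field] };
    }
  }
  return diff;
}
```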
- Throw service error from query runner
- Catch in resolver factories
- Map to GraphQL errors
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
### Overview
This PR builds upon #5153, adding the ability to get a repository for
custom objects. The `entitySchema` is now generated for both standard
and custom objects based on metadata stored in the database instead of
the decorated `WorkspaceEntity` in the code. This change ensures that standard objects with custom fields and relations are supported, as well as custom objects.
### Implementation Details
#### Key Changes:
- **Dynamic Schema Generation:** The `entitySchema` for standard and
custom objects is now dynamically generated from the metadata stored in
the database. This shift allows for greater flexibility and
adaptability, particularly for standard objects with custom fields and
relations.
- **Custom Object Repository Retrieval:** A repository for a custom
object can be retrieved using `TwentyORMManager` based on the object's
name. Here's an example of how this can be achieved:
```typescript
const repository = await this.twentyORMManager.getRepository('custom');
/*
 * The `repository` variable will be typed as follows, ensuring that
 * standard fields and relations are properly typed:
 *
 * const repository: WorkspaceRepository<CustomWorkspaceEntity & {
 *   [key: string]: any;
 * }>
 */
const res = await repository.find({});
```
Fixes #6179
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
Co-authored-by: Weiko <corentin@twenty.com>
## Context
We've created a yoga (GraphQL server) hook that catches requests and caches them when needed. In practice, we use it on the "objects" query because it is often queried by the front-end and should never return something different unless the schema has been intentionally changed by the user when editing their data model (updating objects, fields, etc.).
The issue is that we always cached the response regardless of its result, even when it failed. This PR fixes that behaviour by only caching the query response if it is successful.
I'm also fixing the cache key: the signature lets users pass multiple operations, and the cache key was not taking this into account (we always use it with only one operation, but we might have had issues in the future because one operation's response could have overwritten another's cached response). Now the cache key contains the name of the operation as well.
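A simplified sketch of the fixed behaviour (names and shapes are illustrative, not the actual hook code):
```typescript
// The cache key now includes the operation name, and the response is only
// cached when it contains no errors.
type ExecutionResult = { data?: unknown; errors?: ReadonlyArray<unknown> };

function buildCacheKey(workspaceId: string, operationName: string): string {
  return `graphql:objects:${workspaceId}:${operationName}`;
}

function maybeCache(
  cache: Map<string, ExecutionResult>,
  workspaceId: string,
  operationName: string,
  result: ExecutionResult,
): void {
  if (result.errors?.length) {
    return; // failed responses are no longer cached
  }
  cache.set(buildCacheKey(workspaceId, operationName), result);
}
```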
## Test
Tested locally by manually throwing an error in the JWT auth guard.
- Refactor connected account module
- Move blocklist into its own module
- Move contact-creation-manager into its own module
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
We call `convertExceptionToGraphQLError` in the exception handler for HTTP exceptions, but we don't take into account exceptions that are already GraphQL errors; because of that, `convertExceptionToGraphQLError` falls back to a 500.
Now, if the exception is a `BaseGraphqlError` (a custom GraphQL error we throw in the code), we throw it directly.
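A minimal sketch of the change, assuming the `BaseGraphqlError` class name from above (the surrounding handler is simplified):
```typescript
import { GraphQLError } from 'graphql';

// Stand-in for the custom GraphQL error base class thrown in the code.
class BaseGraphqlError extends GraphQLError {}

function convertExceptionToGraphQLError(exception: Error): GraphQLError {
  // Exceptions that are already GraphQL errors are returned as-is
  // instead of being wrapped into a generic 500.
  if (exception instanceof BaseGraphqlError) {
    return exception;
  }
  return new GraphQLError('Internal server error', {
    extensions: { code: 'INTERNAL_SERVER_ERROR' },
  });
}
```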
BEFORE
<img width="957" alt="Screenshot 2024-07-12 at 15 33 03"
src="https://github.com/user-attachments/assets/22ddae13-4996-4ad3-8f86-dd17c2922ca8">
AFTER
<img width="923" alt="Screenshot 2024-07-12 at 15 32 01"
src="https://github.com/user-attachments/assets/d3d6db93-6d28-495c-a4b4-ba4e47d45abd">
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
Added:
- An "Ask AI" command to the command menu.
- A simple GraphQL resolver that converts the user's question into a
relevant SQL query using an LLM, runs the query, and returns the result.
<img width="428" alt="Screenshot 2024-06-09 at 20 53 09"
src="https://github.com/twentyhq/twenty/assets/171685816/57127f37-d4a6-498d-b253-733ffa0d209f">
No security concerns have been addressed; this is only a proof-of-concept and not intended to be enabled in production.
All changes are behind a feature flag called `IS_ASK_AI_ENABLED`.
---------
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
This PR introduces an `upsert` parameter (alongside the existing `data` param) for the `createOne` and `createMany` mutations.
When `upsert` is set to `true`, the function will look for records with the same id if an id was passed. If no id was passed, it will leverage the existing duplicate-check mechanism to find a duplicate. If a record is found, the function will perform an update instead of a create.
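In pseudocode terms, the flow is roughly the following (the helper functions passed in `deps` are hypothetical stand-ins for the query-runner internals):
```typescript
// Rough sketch of the upsert decision flow described above.
type RecordData = { id?: string; [key: string]: unknown };

async function createOneWithUpsert(
  data: RecordData,
  upsert: boolean,
  deps: {
    findById: (id: string) => Promise<RecordData | null>;
    findDuplicate: (data: RecordData) => Promise<RecordData | null>;
    update: (id: string, data: RecordData) => Promise<RecordData>;
    create: (data: RecordData) => Promise<RecordData>;
  },
): Promise<RecordData> {
  if (upsert) {
    // Prefer an id match; otherwise reuse the duplicate-check mechanism.
    const existing = data.id
      ? await deps.findById(data.id)
      : await deps.findDuplicate(data);
    if (existing?.id) {
      return deps.update(existing.id, data);
    }
  }
  return deps.create(data);
}
```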
Unfortunately, I had to remove some nice tests that existed on the args factory. Those tests were mostly testing the duplication rule generation logic, but through a GraphQL angle. Since I moved the duplication rule logic to a dedicated service, if I kept the tests but mocked the service we wouldn't really be testing anything useful. The right path would be to create new tests for this service that compare the JSON output rather than the GraphQL output, but I chose not to work on this as it's equivalent to rewriting the tests from scratch and I have other competing priorities.
#### Overview
This PR introduces a new API for dynamically registering and executing
pre and post query hooks in the Workspace Query Hook system using the
`@WorkspaceQueryHook` decorator. This approach eliminates the need for
manual provider registration and fixes the issue of an `undefined` or `null` repository when using `@InjectWorkspaceRepository`.
#### New API
**Define a Hook**
Use the `@WorkspaceQueryHook` decorator to define pre or post hooks:
```typescript
@WorkspaceQueryHook({
  key: `calendarEvent.findMany`,
  scope: Scope.REQUEST,
})
export class CalendarEventFindManyPreQueryHook
  implements WorkspaceQueryHookInstance
{
  async execute(
    userId: string,
    workspaceId: string,
    payload: FindManyResolverArgs,
  ): Promise<void> {
    if (!payload?.filter?.id?.eq) {
      throw new BadRequestException('id filter is required');
    }
    // Implement hook logic here
  }
}
```
This API simplifies the registration and execution of query hooks,
providing a more flexible and maintainable approach.
---------
Co-authored-by: Weiko <corentin@twenty.com>
- Remove filters from the metadata REST API
- Add `limit`, `before` and `after` parameters for metadata
- Remove update from metadata relations
- Fix typing issue
- Fix naming
- Fix `before` parameter
---------
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
Filtering relations is not allowed (see `packages/twenty-server/src/engine/metadata-modules/relation-metadata/dtos/relation-metadata.dto.ts`), so we remove filtering from the find-many relations endpoint.
We also fixed some bugs in the result structure and the metadata OpenAPI schema.
### Overview
This PR introduces significant enhancements to the MessageQueue module
by integrating `@Processor`, `@Process`, and `@InjectMessageQueue`
decorators. These changes streamline the process of defining and
managing queue processors and job handlers, and also allow for
request-scoped handlers, improving compatibility with services that rely
on scoped providers like TwentyORM repositories.
### Key Features
1. **Decorator-based Job Handling**: Use `@Processor` and `@Process`
decorators to define job handlers declaratively.
2. **Request Scope Support**: Job handlers can be scoped per request,
enhancing integration with request-scoped services.
### Usage
#### Defining Processors and Job Handlers
The `@Processor` decorator is used to define a class that processes jobs
for a specific queue. The `@Process` decorator is applied to methods
within this class to define specific job handlers.
##### Example 1: Specific Job Handlers
```typescript
import { Processor, Process, InjectMessageQueue } from 'src/engine/integrations/message-queue';

@Processor('taskQueue')
export class TaskProcessor {
  @Process('taskA')
  async handleTaskA(job: { id: string; data: any }) {
    console.log(`Handling task A with data:`, job.data);
    // Logic for task A
  }

  @Process('taskB')
  async handleTaskB(job: { id: string; data: any }) {
    console.log(`Handling task B with data:`, job.data);
    // Logic for task B
  }
}
```
In the example above, `TaskProcessor` is responsible for processing jobs
in the `taskQueue`. The `handleTaskA` method will only be called for
jobs with the name `taskA`, while `handleTaskB` will be called for
`taskB` jobs.
##### Example 2: General Job Handler
```typescript
import { Processor, Process, InjectMessageQueue } from 'src/engine/integrations/message-queue';

@Processor('generalQueue')
export class GeneralProcessor {
  @Process()
  async handleAnyJob(job: { id: string; name: string; data: any }) {
    console.log(`Handling job ${job.name} with data:`, job.data);
    // Logic for any job
  }
}
```
In this example, `GeneralProcessor` handles all jobs in the
`generalQueue`, regardless of the job name. The `handleAnyJob` method
will be invoked for every job added to the `generalQueue`.
#### Adding Jobs to a Queue
You can use the `@InjectMessageQueue` decorator to inject a queue into a
service and add jobs to it.
##### Example:
```typescript
import { Injectable } from '@nestjs/common';
import { InjectMessageQueue, MessageQueue } from 'src/engine/integrations/message-queue';

@Injectable()
export class TaskService {
  constructor(
    @InjectMessageQueue('taskQueue') private readonly taskQueue: MessageQueue,
  ) {}

  async addTaskA(data: any) {
    await this.taskQueue.add('taskA', data);
  }

  async addTaskB(data: any) {
    await this.taskQueue.add('taskB', data);
  }
}
```
In this example, `TaskService` adds jobs to the `taskQueue`. The
`addTaskA` and `addTaskB` methods add jobs named `taskA` and `taskB`,
respectively, to the queue.
#### Using Scoped Job Handlers
To utilize request-scoped job handlers, specify the scope in the
`@Processor` decorator. This is particularly useful for services that
use scoped repositories like those in TwentyORM.
##### Example:
```typescript
import { Processor, Process, InjectMessageQueue, Scope } from 'src/engine/integrations/message-queue';

@Processor({ name: 'scopedQueue', scope: Scope.REQUEST })
export class ScopedTaskProcessor {
  @Process('scopedTask')
  async handleScopedTask(job: { id: string; data: any }) {
    console.log(`Handling scoped task with data:`, job.data);
    // Logic for scoped task, which might use request-scoped services
  }
}
```
Here, the `ScopedTaskProcessor` is associated with `scopedQueue` and
operates with request scope. This setup is essential when the job
handler relies on services that need to be instantiated per request,
such as scoped repositories.
### Migration Notes
- **Decorators**: Refactor job handlers to use `@Processor` and
`@Process` decorators.
- **Request Scope**: Utilize the scope option in `@Processor` if your
job handlers depend on request-scoped services.
Fixes #5628
---------
Co-authored-by: Weiko <corentin@twenty.com>
- Improve the REST API by introducing `startingAfter`/`endingBefore` (we previously had `lastCursor`) and moving `pageInfo`/`totalCount` outside of the data object
- Fix broken GraphQL playground on the website
- Improve analytics by sending the server URL
- Remove the existing listener that was backfilling created records without a position
- Switch to a job that backfills all objects within a workspace
- Adapt `FIND_BY_POSITION` so it can fetch objects without a position; previously we needed to input a number
- Refactor the record position factory and the record position query factory
- Override the position if not present during `createMany`

To avoid assigning the same position to every record in a `createMany`, the logic is (see the sketch below):
- if inserted last, use last position + arg index + 1
- if inserted first, use first position - arg index - 1
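A small sketch of that assignment (illustrative, not the exact factory code):
```typescript
// Each record in the same createMany call is offset by its argument index,
// so inserted records don't all collide on the same position.
function computePosition(
  argIndex: number,
  insertFirst: boolean,
  firstPosition: number,
  lastPosition: number,
): number {
  return insertFirst
    ? firstPosition - argIndex - 1 // inserted first: walk downwards
    : lastPosition + argIndex + 1; // inserted last: walk upwards
}
```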
In this PR, I'm refactoring the messaging module into smaller pieces that each have **ONE** responsibility: import messages, clean messages, handle message participant creation, instead of having ~30 modules (one per service, job, cron, ...). This is mandatory to start introducing drivers (gmail, office365, ...) IMO. It is too difficult to enforce common interfaces when we have this many of them (30 modules...). Not all modules should be exposed.
Right now, we have services that are almost functions: do-that-and-this.service.ts / do-that-and-this.module.ts. I believe we should have something more organized at a high level, and it does not matter that much if we have a bit of duplicated code.
Note that the proposal is not fully implemented in the current PR, which has only focused on the messaging folder (the biggest part).
Here is the high level proposal:
- connected-account: token-refresher
- blocklist
- messaging: message-importer, message-cleaner, message-participants,
... (right now I'm keeping a big messaging-common but this will
disappear see below)
- calendar: calendar-importer, calendar-cleaner, ...
Consequences:
1) It's OK to re-implement some things several times. Examples:
- Error handling in connected-account, messaging, and calendar, instead of trying to unify it. They are actually different kinds of error handling. The only thing that might be in common is the GmailError => CommonError parsing, and I'm not even sure it makes a lot of sense, as these 3 APIs might actually have different formats.
- Auto-creation. Calendar and Messaging could actually have different rules.
2) **We should not have circular dependencies:**
- I believe this was the reason why we had so many modules: to be able to cherry-pick the ones we wanted and avoid circular deps. This is not the right approach IMO; we need to architect the whole messaging domain by defining high-level blocks that won't have circular dependencies by design. If we encounter one, we should rethink and break the block up in a way that makes sense.
- Example: connected-account.resolver is not in the same module as token-refresher. ==> connected-account.resolver => message-importer (as we trigger a full sync job when we connect an account) => token-refresher (as we refresh the token on message import). connected-account.resolver and token-refresher are both in the connected-account folder but should be in different modules; otherwise it's a circular dependency. It does not mean that we should create one module per service as was done before.
In a nutshell: the code needs to be thought of in terms of responsibilities and in a way that enforces high-level interfaces (and avoids circular dependencies).
Bonus: as you can see, this PR also removes a lot of code, thanks to the removal of many .module.ts files (and because I'm removing the sync scripts v2 feature flag and deleting old code).
Bonus: I have prefixed service names with Messaging to improve dev XP. With the prefix, MessagingGmailErrorHandler and CalendarGmailErrorHandler can differ without a generic GmailErrorHandler being ambiguous, for instance.
Query read timeouts happen when a remote server is not available. They break:
- the remote server show page
- the record table page of imported remote tables

This PR catches the exception so it does not go to Sentry in either case.
Also did two renamings.
For remotes, we will only create the foreign key, without the relation metadata. The expected behavior is:
- it is possible to create an activity, but the remote object will not be displayed in the relations of the activity
- the remote objects should not be available in the search for relations

Also switched the number settings to an enum, since we now have to handle the `BigInt` case.
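Something along these lines (the exact enum name and members are an assumption, not the actual definition):
```typescript
// Hypothetical enum for the number settings, replacing a free-form setting
// now that the BigInt case must be handled.
export enum NumberDataType {
  FLOAT = 'FLOAT',
  INT = 'INT',
  BIGINT = 'BIGINT',
}
```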
---------
Co-authored-by: Thomas Trompette <thomast@twenty.com>