## Context
Our Flexible Schema engine dynamically generates entities, tables, and APIs
for us, but it was not flexible enough to build indexes in the DB. With more
and more features involving heavy queries, such as Messaging, we are now
adding a new WorkspaceIndex() decorator for our standard objects (support
for custom objects will come later). This decorator gives the workspace
sync metadata manager enough information to generate the proper migrations
that will create or drop indexes on demand.
To stay aligned with the rest of the engine, we are adding 2 new tables,
IndexMetadata and IndexFieldMetadata, that will store the info about our
indexes, as sketched below.
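For context, a minimal sketch of what these two entities could look like; apart from the `expression` column (motivated in the "Next step" note below), every column here is an assumption, not the final schema:
```typescript
// Hypothetical sketch of the two new metadata entities; apart from
// `expression` (see the "Next step" note below), every column is an
// assumption, not the final schema.
import {
  Column,
  Entity,
  ManyToOne,
  OneToMany,
  PrimaryGeneratedColumn,
} from 'typeorm';

@Entity('indexMetadata')
export class IndexMetadataEntity {
  @PrimaryGeneratedColumn('uuid')
  id: string;

  @Column()
  name: string;

  // Reserved for more complex index expressions.
  @Column({ type: 'text', nullable: true })
  expression: string | null;

  @OneToMany(
    () => IndexFieldMetadataEntity,
    (indexField) => indexField.indexMetadata,
  )
  indexFieldMetadatas: IndexFieldMetadataEntity[];
}

@Entity('indexFieldMetadata')
export class IndexFieldMetadataEntity {
  @PrimaryGeneratedColumn('uuid')
  id: string;

  @ManyToOne(
    () => IndexMetadataEntity,
    (indexMetadata) => indexMetadata.indexFieldMetadatas,
  )
  indexMetadata: IndexMetadataEntity;

  @Column()
  fieldMetadataId: string;

  // Position of the field in the index; order matters for PostgreSQL.
  @Column()
  order: number;
}
```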
## Implementation
```typescript
@WorkspaceEntity({
  standardId: STANDARD_OBJECT_IDS.person,
  namePlural: 'people',
  labelSingular: 'Person',
  labelPlural: 'People',
  description: 'A person',
  icon: 'IconUser',
})
export class PersonWorkspaceEntity extends BaseWorkspaceEntity {
  @WorkspaceField({
    standardId: PERSON_STANDARD_FIELD_IDS.email,
    type: FieldMetadataType.EMAIL,
    label: 'Email',
    description: 'Contact’s Email',
    icon: 'IconMail',
  })
  @WorkspaceIndex()
  email: string;
}
```
By simply adding the @WorkspaceIndex() decorator, the sync-metadata command
will create a new index for that column.
We can also add composite indexes; note that the column order is important
for PostgreSQL.
```typescript
@WorkspaceEntity({
  standardId: STANDARD_OBJECT_IDS.person,
  namePlural: 'people',
  labelSingular: 'Person',
  labelPlural: 'People',
  description: 'A person',
  icon: 'IconUser',
})
@WorkspaceIndex(['phone', 'email'])
export class PersonWorkspaceEntity extends BaseWorkspaceEntity {
  // ...
}
```
Currently, composite fields and relation fields are not handled by
@WorkspaceIndex(), so you will need to use this notation instead:
```typescript
@WorkspaceIndex(['companyId', 'nameFirstName'])
export class PersonWorkspaceEntity extends BaseWorkspaceEntity {
  // ...
}
```
<img width="700" alt="Screenshot 2024-06-21 at 15 15 45"
src="https://github.com/twentyhq/twenty/assets/1834158/ac6da1d9-d315-40a4-9ba6-6ab9ae4709d4">
Next step: we might need to implement more complex index expressions, which
is why we have an expression column in IndexMetadata.
Here is what I had in mind for the decorator (still open to discussion):
```typescript
@WorkspaceIndex(['nameFirstName', 'nameLastName'], {
  expression: "$1 || ' ' || $2",
})
export class PersonWorkspaceEntity extends BaseWorkspaceEntity {
  // ...
}
```
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
### Overview
This PR introduces significant enhancements to the MessageQueue module
by integrating `@Processor`, `@Process`, and `@InjectMessageQueue`
decorators. These changes streamline the process of defining and
managing queue processors and job handlers, and also allow for
request-scoped handlers, improving compatibility with services that rely
on scoped providers like TwentyORM repositories.
### Key Features
1. **Decorator-based Job Handling**: Use `@Processor` and `@Process`
decorators to define job handlers declaratively.
2. **Request Scope Support**: Job handlers can be scoped per request,
enhancing integration with request-scoped services.
### Usage
#### Defining Processors and Job Handlers
The `@Processor` decorator is used to define a class that processes jobs
for a specific queue. The `@Process` decorator is applied to methods
within this class to define specific job handlers.
##### Example 1: Specific Job Handlers
```typescript
import { Process, Processor } from 'src/engine/integrations/message-queue';

@Processor('taskQueue')
export class TaskProcessor {
  @Process('taskA')
  async handleTaskA(job: { id: string; data: any }) {
    console.log(`Handling task A with data:`, job.data);
    // Logic for task A
  }

  @Process('taskB')
  async handleTaskB(job: { id: string; data: any }) {
    console.log(`Handling task B with data:`, job.data);
    // Logic for task B
  }
}
```
In the example above, `TaskProcessor` is responsible for processing jobs
in the `taskQueue`. The `handleTaskA` method will only be called for
jobs with the name `taskA`, while `handleTaskB` will be called for
`taskB` jobs.
##### Example 2: General Job Handler
```typescript
import { Process, Processor } from 'src/engine/integrations/message-queue';

@Processor('generalQueue')
export class GeneralProcessor {
  @Process()
  async handleAnyJob(job: { id: string; name: string; data: any }) {
    console.log(`Handling job ${job.name} with data:`, job.data);
    // Logic for any job
  }
}
```
In this example, `GeneralProcessor` handles all jobs in the
`generalQueue`, regardless of the job name. The `handleAnyJob` method
will be invoked for every job added to the `generalQueue`.
#### Adding Jobs to a Queue
You can use the `@InjectMessageQueue` decorator to inject a queue into a
service and add jobs to it.
##### Example:
```typescript
import { Injectable } from '@nestjs/common';
import { InjectMessageQueue, MessageQueue } from 'src/engine/integrations/message-queue';

@Injectable()
export class TaskService {
  constructor(
    @InjectMessageQueue('taskQueue') private readonly taskQueue: MessageQueue,
  ) {}

  async addTaskA(data: any) {
    await this.taskQueue.add('taskA', data);
  }

  async addTaskB(data: any) {
    await this.taskQueue.add('taskB', data);
  }
}
```
In this example, `TaskService` adds jobs to the `taskQueue`. The
`addTaskA` and `addTaskB` methods add jobs named `taskA` and `taskB`,
respectively, to the queue.
#### Using Scoped Job Handlers
To utilize request-scoped job handlers, specify the scope in the
`@Processor` decorator. This is particularly useful for services that
use scoped repositories like those in TwentyORM.
##### Example:
```typescript
import { Process, Processor, Scope } from 'src/engine/integrations/message-queue';

@Processor({ name: 'scopedQueue', scope: Scope.REQUEST })
export class ScopedTaskProcessor {
  @Process('scopedTask')
  async handleScopedTask(job: { id: string; data: any }) {
    console.log(`Handling scoped task with data:`, job.data);
    // Logic for scoped task, which might use request-scoped services
  }
}
```
Here, the `ScopedTaskProcessor` is associated with `scopedQueue` and
operates with request scope. This setup is essential when the job
handler relies on services that need to be instantiated per request,
such as scoped repositories.
### Migration Notes
- **Decorators**: Refactor job handlers to use the `@Processor` and
`@Process` decorators (a before/after sketch follows this list).
- **Request Scope**: Use the `scope` option in `@Processor` if your
job handlers depend on request-scoped services.
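As a rough before/after sketch of that refactor (the "before" handler shape is an assumption for illustration, not taken from this PR):
```typescript
import { Process, Processor } from 'src/engine/integrations/message-queue';

// Before (assumed shape): one standalone job class per job name.
export class TaskAJob {
  async handle(data: any): Promise<void> {
    // Logic for task A
  }
}

// After: handlers grouped per queue and declared with decorators.
@Processor('taskQueue')
export class TaskProcessor {
  @Process('taskA')
  async handleTaskA(job: { id: string; data: any }) {
    // Logic for task A
  }
}
```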
Fix #5628
---------
Co-authored-by: Weiko <corentin@twenty.com>
First step towards creating credentials for the database proxy.
In the next PRs:
- When calling endpoint, create database Postgres on proxy server
- Setup user on database using postgresCredentials
- Build remote server on DB to access workspace data
- Rename syncSubStatus to syncStage
- Rename ongoingSyncStartedAt to syncStageStartedAt
- Remove throttlePauseUntil from the DB and compute it from
syncStageStartedAt and throttleFailureCount, as sketched below
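A sketch of how the pause could then be derived on read; the exponential backoff policy is an assumption for illustration, not the actual rule:
```typescript
// Hypothetical sketch: derive the pause deadline from syncStageStartedAt
// and throttleFailureCount instead of persisting throttlePauseUntil.
// The exponential backoff policy is an assumption for illustration.
const BASE_PAUSE_MS = 60_000;

export const computeThrottlePauseUntil = (
  syncStageStartedAt: Date,
  throttleFailureCount: number,
): Date =>
  new Date(
    syncStageStartedAt.getTime() +
      BASE_PAUSE_MS * 2 ** Math.max(throttleFailureCount - 1, 0),
  );
```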
In this PR, I'm refactoring the messaging module into smaller pieces that
each have **ONE** responsibility: import messages, clean messages, handle
message participant creation, instead of having ~30 modules (one per
service, jobs, cron, ...). This is mandatory to start introducing drivers
(Gmail, Office365, ...) IMO. It is too difficult to enforce common
interfaces when we have too many of them (30 modules...), and not all
modules should be exposed.
Right now, we have services that are almost functions:
do-that-and-this.service.ts / do-that-and-this.module.ts.
I believe we should have something more organized at a high level, and it
does not matter that much if we have a bit of duplicated code.
Note that the proposal is not fully implemented in the current PR, which
only focuses on the messaging folder (the biggest part).
Here is the high-level proposal:
- connected-account: token-refresher
- blocklist
- messaging: message-importer, message-cleaner, message-participants,
... (right now I'm keeping a big messaging-common, but this will
disappear; see below)
- calendar: calendar-importer, calendar-cleaner, ...
Consequences:
1) It's OK to re-implement some things several times. Examples:
- error handling in connected-account, messaging, and calendar, instead of
trying to unify it. They are actually different error-handling concerns.
The only thing that might be shared is the GmailError => CommonError
parsing, and I'm not even sure that makes a lot of sense, as these 3 APIs
might actually have different formats
- auto-creation: Calendar and Messaging could actually have different
rules
2) **We should not have circular dependencies:**
- I believe this was the reason why we had so many modules: to be able
to cherry-pick the ones we wanted and avoid circular deps. This is not the
right approach IMO; we need to architect the whole messaging domain by
defining high-level blocks that won't have circular dependencies by design.
If we encounter one, we should rethink and break the block up in a way that
makes sense.
- ex: connected-account.resolver is not in the same module as
token-refresher. ==> connected-account.resolver => message-importer (as
we trigger a full sync job when we connect an account) => token-refresher
(as we refresh the token on message import).
connected-account.resolver and token-refresher are both in the
connected-account folder but should be in different modules; otherwise it's
a circular dependency. It does not mean that we should create one module
per service as was done before. A minimal sketch of this layering follows.
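A minimal NestJS sketch of that layering, with illustrative module names (the dependency arrows flow one way, so there is no cycle):
```typescript
// Illustrative module layering: dependencies flow one way only, so there is
// no cycle. Module and provider names are hypothetical.
import { Module } from '@nestjs/common';

@Module({
  // providers: [TokenRefresherService],
  // exports: [TokenRefresherService],
})
export class TokenRefresherModule {}

@Module({
  // message-importer refreshes tokens on message import.
  imports: [TokenRefresherModule],
  // providers: [MessageImporterService],
  // exports: [MessageImporterService],
})
export class MessageImporterModule {}

@Module({
  // The resolver triggers a full sync job when an account is connected.
  imports: [MessageImporterModule],
  // providers: [ConnectedAccountResolver],
})
export class ConnectedAccountResolverModule {}
```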
In a nutshell: the code needs to be thought of in terms of responsibilities
and in a way that enforces high-level interfaces (and avoids circular
dependencies).
Bonus: as you can see, this PR also removes a lot of code thanks to the
removal of many .module.ts files (and because I'm removing the sync
scripts v2 feature flag and deleting old code).
Bonus: I have prefixed service names with Messaging to improve the dev
experience. GmailErrorHandler could differ between MessagingGmailErrorHandler
and CalendarGmailErrorHandler, for instance.
Query read timeouts happen when a remote server is not available. They
break:
- the remote server show page
- the record table page of imported remote tables
This PR catches the exception so it does not reach Sentry in either
case.
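Conceptually, the catch looks like this; the function name and the error-matching logic are hypothetical, not the actual code:
```typescript
// Hypothetical sketch: treat remote-server read timeouts as an expected
// failure mode instead of letting the exception reach Sentry.
const isQueryReadTimeout = (error: unknown): boolean =>
  error instanceof Error &&
  error.message.includes('canceling statement due to statement timeout');

export const fetchRemoteRecords = async <T>(
  runQuery: () => Promise<T[]>,
): Promise<T[]> => {
  try {
    return await runQuery();
  } catch (error) {
    if (isQueryReadTimeout(error)) {
      // Remote server unavailable: return an empty result so the show page
      // and the record table page still render.
      return [];
    }
    throw error; // unrelated errors still propagate (and may reach Sentry)
  }
};
```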
It also includes two renames.
Closes #5069 (back-end part).
And:
- do not display the schemaPendingUpdates status on remote server lists, as
this call would become too costly with dozens of servers
- (refacto) create a foreignTableService
After this is merged, we will be able to delete remoteTable's
availableTables column.
Adding the Stripe integration by making the server logic independent of the
input fields:
- query factories (remote server, foreign data wrapper, foreign table)
loop over fields and values without hardcoding field names
- adding the Stripe input and type
- adding the logic to handle a static schema, simply creating a big object
to store on the server
Additional work:
- rename the username field to user. This is the input intended for the
Postgres user mapping, and we now need a match by name
---------
Co-authored-by: Thomas Trompette <thomast@twenty.com>
Now that we have a persistent cache for schemas, we want to be able to
reset its state when users run the database:reset command; otherwise
schemas won't be in sync with the new DB state (a sketch of the hook is
below).
Note: in an upcoming PR, we want to be able to invalidate the cache at the
workspace level when we change the metadata schema through a Twenty
version upgrade.
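A sketch of the reset hook, assuming a cache storage that exposes a flush method (the real API may differ):
```typescript
// Hypothetical sketch: flush the persisted schema cache as part of
// database:reset so cached schemas cannot drift from the reset DB state.
type SchemaCacheStorage = { flush: () => Promise<void> };

export const resetDatabase = async (
  dropAndRecreateSchemas: () => Promise<void>,
  schemaCacheStorage: SchemaCacheStorage,
): Promise<void> => {
  await dropAndRecreateSchemas();
  // Reset the cached schemas along with the database state.
  await schemaCacheStorage.flush();
};
```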
We should not depend on the foreign data wrapper type to manage distant
tables. The remote server should be enough to handle the table creation.
Here is the new flow to fetch available tables (sketched right after):
- check if the remote server already has available tables stored
- if not, import the full schema into a temporary schema
- copy the table names into the availableTables field
- delete the temporary schema
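A sketch of that flow; the helper names are assumptions, not the actual services:
```typescript
// Hypothetical sketch of the flow above; the helpers are placeholders.
declare function importForeignSchema(
  serverId: string,
  schema: string,
): Promise<void>;
declare function listTables(schema: string): Promise<string[]>;
declare function saveAvailableTables(
  serverId: string,
  tables: string[],
): Promise<void>;
declare function dropSchema(schema: string): Promise<void>;

export const fetchAvailableTables = async (remoteServer: {
  id: string;
  availableTables: string[] | null;
}): Promise<string[]> => {
  // 1. Check whether the remote server already has available tables stored.
  if (remoteServer.availableTables) {
    return remoteServer.availableTables;
  }

  // 2. If not, import the full distant schema into a temporary schema.
  const temporarySchema = `temp_${remoteServer.id}`;
  await importForeignSchema(remoteServer.id, temporarySchema);

  // 3. Copy the table names into the availableTables field.
  const tables = await listTables(temporarySchema);
  await saveAvailableTables(remoteServer.id, tables);

  // 4. Delete the temporary schema.
  await dropSchema(temporarySchema);

  return tables;
};
```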
Left todo:
- update the remote server input for Postgres so we receive the schema
---------
Co-authored-by: Thomas Trompette <thomast@twenty.com>
## Context
We recently enabled the option to bypass SSL certificate authority
validation when establishing a connection to PostgreSQL. Previously, if
this validation failed, the server would revert to unencrypted traffic.
Now, it maintains encryption even if the SSL certificate check fails. In
the process, we overlooked a few DataSource setups, prompting a review
of DataSource creation within our code.
## Current State
Our DataSource initialization is distributed as follows:
- **Database folder**: Contains 'core', 'metadata', and 'raw'
DataSources. The 'core' and 'metadata' DataSources manage migrations and
static resolver calls to the database. The 'raw' DataSource is utilized
in scripts and commands that require handling both aspects.
- **typeorm.service.ts script**: These DataSources facilitate
multi-schema connections.
## Vision for Discussion
- **SystemSchema (formerly core) DataSource**: Manages system schema
migrations and system resolvers/repos. The 'core' schema will be renamed
to 'system' as the Core API will include parts of the system and
workspace schemas.
- **MetadataSchema DataSource**: Handles metadata schema migrations and
metadata API resolvers/repos.
- **(Dynamic) WorkspaceSchema DataSource**: Will be used in the Twenty
ORM to access a specific workspace schema.
We currently do not support cross-schema joins, so maintaining these
DataSources separately should be feasible. Core API resolvers will
select the appropriate DataSource based on the field context (a rough
sketch follows this list).
- **To be discussed**: The potential need for an AdminDataSource (akin
to 'Raw'), which would be used in commands, setup scripts, and the admin
panel to connect to any database schema without loading any model. This
DataSource should be reserved for cases where utilizing metadata,
system, or workspace entities is impractical.
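For the sake of that discussion, resolver-side selection could look roughly like this (entirely illustrative; none of these names exist today):
```typescript
// Entirely illustrative: select the DataSource that matches the schema
// context of the field being resolved.
import { DataSource } from 'typeorm';

type SchemaContext = 'system' | 'metadata' | 'workspace';

export class DataSourceRegistry {
  constructor(
    private readonly systemDataSource: DataSource,
    private readonly metadataDataSource: DataSource,
    private readonly workspaceDataSourceFactory: (
      workspaceId: string,
    ) => Promise<DataSource>,
  ) {}

  async resolve(
    context: SchemaContext,
    workspaceId?: string,
  ): Promise<DataSource> {
    switch (context) {
      case 'system':
        return this.systemDataSource;
      case 'metadata':
        return this.metadataDataSource;
      case 'workspace':
        if (!workspaceId) {
          throw new Error('workspaceId is required for workspace schemas');
        }
        return this.workspaceDataSourceFactory(workspaceId);
    }
  }
}
```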
## In This PR
- Ensuring all existing DataSources are compliant with the SSL update.
- Introducing RawDataSource to eliminate the need for declaring new
DataSource() instances in commands.
New strategy (a sketch of the shape follows this list):
- add a settings field on FieldMetadata, containing a boolean isIdField
and, for numbers, a precision
- if isIdField is true, the GraphQL scalar returned will be a GraphQL ID.
This allows the app to work even for ids that are not uuids
- remove the global dateScalar and numberScalar modes, which were not used
- set limit as an Integer
- check manually in query-runner mutations that we send a valid id
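A minimal sketch of that shape; the property and helper names are assumptions:
```typescript
// Hypothetical sketch of the new settings field and of the scalar selection
// it drives; property and helper names are assumptions.
import { GraphQLID, GraphQLScalarType } from 'graphql';

type FieldMetadataSettings = {
  isIdField?: boolean;
  // For number fields only:
  precision?: number;
};

export const getScalarForField = (
  defaultScalar: GraphQLScalarType,
  settings?: FieldMetadataSettings,
): GraphQLScalarType =>
  // Id fields are exposed as GraphQL IDs, so the app keeps working even
  // for ids that are not uuids.
  settings?.isIdField ? GraphQLID : defaultScalar;
```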
Todo left:
- remove WorkspaceBuildSchemaOptions, since it is not used anymore. Will do
in another PR
---------
Co-authored-by: Thomas Trompette <thomast@twenty.com>
Co-authored-by: Weiko <corentin@twenty.com>
We will require a remote table entity to map the distant table name to the
local foreign table name (a minimal sketch follows).
Introducing the entity:
- new source of truth to know whether a table is synced or not
- created synchronously, at the same time as the metadata and the foreign
table
Adding a few more changes:
- exceptions rather than errors, so the user can see them
- the `pluralize` library, which allows us to stop adding the `Remote`
suffix to names
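A minimal sketch of the entity; apart from the two table-name columns, the fields are assumptions:
```typescript
// Hypothetical sketch of the new entity: it maps the distant table name to
// the local foreign table name and serves as the source of truth for sync
// status. Column names other than the two table names are assumptions.
import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

@Entity('remoteTable')
export class RemoteTableEntity {
  @PrimaryGeneratedColumn('uuid')
  id: string;

  @Column()
  remoteServerId: string;

  // Name of the table on the distant database.
  @Column()
  distantTableName: string;

  // Name of the local foreign table it maps to.
  @Column()
  localTableName: string;
}
```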
---------
Co-authored-by: Thomas Trompette <thomast@twenty.com>
Added an isAuditLogged column to object-metadata-entity.ts.
This is my first open source pull request. Please do let me know if I made
any mistakes. I will be grateful. Thank you!
---------
Co-authored-by: Félix Malfait <felix@twenty.com>
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
This PR drops the `targetColumnMap` column of fieldMetadata entities.
The goal of this column was to map fields to their respective columns in
the table.
We decided to drop it and instead compute the column name on the fly when
we need it, as that is easier to support (a sketch of the idea follows).
Some parts of the code have been refactored to make the implementation of
composite types easier to understand and maintain.
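Conceptually, computing the column name on the fly could look like this; the helper is hypothetical, and the composite naming convention (e.g. name + firstName → nameFirstName) is an assumption:
```typescript
// Hypothetical sketch: derive column names from field metadata on the fly
// instead of persisting a targetColumnMap. The composite naming convention
// (e.g. name + firstName -> "nameFirstName") is an assumption.
export const computeCompositeColumnName = (
  fieldName: string,
  propertyName: string,
): string =>
  `${fieldName}${propertyName.charAt(0).toUpperCase()}${propertyName.slice(1)}`;

// Non-composite fields simply use the field name as the column name.
export const computeColumnName = (fieldName: string): string => fieldName;
```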
Fix #3760
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
## Context
We are now removing the Messaging V2 feature flag to use it everywhere.
## Implementation
- renaming FetchWorkspaceMessagesCommandsModule to MessagingCommandModule
to make it more generic, since it hosts all commands related to the
messaging module
- creating a crons folder inside commands and jobs; crons should be named
xxx.cron.command.ts instead of xxx.command.ts. Same for jobs: cron jobs
should be named xxx.cron.job.ts. In a future PR we should make sure those
cron jobs implement a CronJob interface, since they are a bit different
(a CronJob does not take a payload, unlike a Job)
- cron commands have been renamed to "cron:$module:command", so
`fetch-all-workspaces-messages-from-cache:cron:start` has been renamed
to `cron:messaging:gmail-fetch-messages-from-cache`. Also, having to
create a command to stop the cron is a bit painful to maintain, so I
removed those for now; this can easily be done manually with pg-boss or
BullMQ
- removing the full-sync and partial-sync commands, as they were there for
testing only; we might bring them back at some point, but we will have to
adapt the code anyway
- the feature flag has been removed from the MessageChannel standard object
to make sure those new columns are created during the next sync-metadata
run
## Context
A new ADDRESS field type has been introduced, and the company object has
been updated to use it; however, this introduced a few regressions.
The right strategy would be to introduce a new field and rename the old
one.
This PR reverts that change to fix the issue.
* update calendarEvent labels and description to match Figma
* modify conferenceUri to conferenceLink with LINK type
* update format-google-calendar-event.util to match new conferenceLink
* update CalendarEventDetails since overriding the fields is no longer needed
* fix mock metadata
* generate new uuid for field conferenceLink
* Build remote server
* Add getters
* Migrate to json inputs
* Use extendable type
* Use regex validation
* Remove acronyms
---------
Co-authored-by: Thomas Trompette <thomast@twenty.com>