When a user deletes their account on a specific workspace, we remove
them as a workspaceMember, and if no workspaceMember remains, we delete
the workspace and the associated Stripe subscription.
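A minimal sketch of this cascade, with hypothetical service shapes (the actual services and wiring differ):

```typescript
import Stripe from 'stripe';

// Hypothetical service shapes, for illustration only.
interface WorkspaceMemberService {
  delete(where: { userId: string; workspaceId: string }): Promise<void>;
  count(where: { workspaceId: string }): Promise<number>;
}
interface WorkspaceService {
  delete(workspaceId: string): Promise<void>;
}

export async function handleUserWorkspaceDeletion(
  userId: string,
  workspace: { id: string; stripeSubscriptionId: string },
  deps: {
    members: WorkspaceMemberService;
    workspaces: WorkspaceService;
    stripe: Stripe;
  },
): Promise<void> {
  // Remove the user as a workspaceMember first.
  await deps.members.delete({ userId, workspaceId: workspace.id });

  // If no workspaceMember remains, tear down the workspace and its billing.
  if ((await deps.members.count({ workspaceId: workspace.id })) === 0) {
    await deps.stripe.subscriptions.cancel(workspace.stripeSubscriptionId);
    await deps.workspaces.delete(workspace.id);
  }
}
```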
## Context
Yoga can catch its own errors, and we don't want to convert them again.
Moreover, those errors don't have an `originalError` property and should
be schema-related only (400 validation), so we only want to send them
back to the API caller without going through the exception handler.
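A sketch of the filtering rule (the `convertWithExceptionHandler` helper is hypothetical, standing in for our exception handler):

```typescript
import { GraphQLError } from 'graphql';

// Hypothetical converter standing in for the real exception handler.
declare function convertWithExceptionHandler(error: GraphQLError): GraphQLError;

// Yoga's own schema-level errors carry no originalError: return them to
// the caller as-is and only convert the rest.
export const handleGraphQLErrors = (errors: readonly GraphQLError[]) =>
  errors.map((error) =>
    error.originalError ? convertWithExceptionHandler(error) : error,
  );
```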
Also fixed an issue in `createMany`, which was throwing a 500 when the
id was missing from the creation payload. It seems the FE always sends
an id, but it should actually be optional since the DB can generate one.
This is a regression from the new UUID validation introduced a few weeks
ago.
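For illustration, the intended validation could look like this with class-validator (a sketch, not the actual DTO):

```typescript
import { IsOptional, IsUUID } from 'class-validator';

class CreateRecordInput {
  // The id must be a valid UUID when provided, but may be omitted so the
  // DB can generate one.
  @IsOptional()
  @IsUUID()
  id?: string;
}
```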
## Context
JobsModule is hard to maintain because we provide all the jobs there,
including their dependencies. This PR aims to split jobs into dedicated
modules.
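A sketch of the target shape, with hypothetical names: each job ships in its own module with only the dependencies it needs.

```typescript
import { Injectable, Module } from '@nestjs/common';

// Hypothetical job; in the real codebase each job declares its own deps.
@Injectable()
class CleanInactiveWorkspaceJob {}

// The job and its dependencies live together, instead of everything
// being provided by a single JobsModule.
@Module({
  providers: [CleanInactiveWorkspaceJob],
})
export class CleanInactiveWorkspaceJobModule {}
```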
Now that we have a persistent cache for schemas, we want to be able to
reset its state when users run `database:reset`; otherwise schemas
won't be in sync with the new DB state.
Note: in an upcoming PR, we want to be able to invalidate the cache at
the workspace level when we change the metadata schema through a Twenty
version upgrade.
In this PR I'm introducing a simple custom graphql-yoga plugin to create
a caching mechanism specific to our metadata.
The cache key is made of the workspace id + the workspace cache version;
with this, the cache is automatically invalidated each time a change is
made to the workspace metadata.
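A minimal sketch of the idea (not the actual plugin: the real one derives the workspace id and cache version from the authenticated context, and the header names below are assumptions):

```typescript
import { Plugin } from 'graphql-yoga';

// In-memory stand-in for the real cache storage.
const responseCache = new Map<string, Response>();

// Hypothetical key derivation; bumping the cache version on any metadata
// change makes old keys unreachable, which invalidates the cache for free.
const getCacheKey = (request: Request): string | null => {
  const workspaceId = request.headers.get('x-workspace-id');
  const cacheVersion = request.headers.get('x-workspace-cache-version');
  return workspaceId && cacheVersion ? `${workspaceId}:${cacheVersion}` : null;
};

export const useMetadataCache = (): Plugin => ({
  onRequest({ request, endResponse }) {
    const key = getCacheKey(request);
    const cached = key ? responseCache.get(key) : undefined;
    if (cached) {
      // Cache hit: short-circuit execution and return the stored response.
      endResponse(cached.clone());
    }
  },
  onResponse({ request, response }) {
    const key = getCacheKey(request);
    if (key) {
      responseCache.set(key, response.clone());
    }
  },
});
```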
## Description
This PR adds a captcha to the login form. One can plug in any one of
three captcha vendors -
1. Google reCAPTCHA -
https://developers.google.com/recaptcha/docs/v3#programmatically_invoke_the_challenge
2. hCaptcha -
https://docs.hcaptcha.com/invisible#programmatically-invoke-the-challenge
3. Cloudflare Turnstile -
https://developers.cloudflare.com/turnstile/get-started/client-side-rendering/#execution-modes
### Issue
- #3546
### Environment variables
1. `CAPTCHA_DRIVER` - `google-recaptcha` | `hcaptcha` | `turnstile`
2. `CAPTCHA_SITE_KEY` - site key
3. `CAPTCHA_SECRET_KEY` - secret key
### Engineering choices
1. If only some of the above env variables are provided, then the
backend generates an error -
<img width="990" alt="image"
src="https://github.com/twentyhq/twenty/assets/60139930/9fb00fab-9261-4ff3-b23e-2c2e06f1bf89">
Please note that the login/signup form will keep working as expected.
2. I'm using a captcha guard that intercepts the request. If a
"captchaToken" is present in the body and all the envs are set, the
captcha token is verified by the backend through the service (see the
sketch after this list).
3. One can use this guard on any resolver to protect it with the
captcha.
4. On the frontend, two hooks, `useGenerateCaptchaToken` and
`useInsertCaptchaScript`, are created. `useInsertCaptchaScript` adds the
respective captcha JS script on the frontend. `useGenerateCaptchaToken`
returns a function that one can use to trigger captcha token generation
programmatically. This allows one to generate a token while keeping the
captcha invisible.
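A sketch of the guard idea (the service shape and names are assumptions, not the actual API):

```typescript
import { CanActivate, ExecutionContext, Injectable } from '@nestjs/common';
import { GqlExecutionContext } from '@nestjs/graphql';

// Hypothetical service shape; the real service wraps whichever driver is
// configured (google-recaptcha, hcaptcha, or turnstile).
export abstract class CaptchaService {
  abstract isConfigured(): boolean;
  abstract validate(token?: string): Promise<boolean>;
}

@Injectable()
export class CaptchaGuard implements CanActivate {
  constructor(private readonly captchaService: CaptchaService) {}

  async canActivate(context: ExecutionContext): Promise<boolean> {
    // Skip verification entirely when no captcha envs are set.
    if (!this.captchaService.isConfigured()) {
      return true;
    }

    // Simplified: the real guard reads "captchaToken" from the request body.
    const { captchaToken } = GqlExecutionContext.create(context).getArgs();

    // Verify the token server-side against the vendor's API.
    return this.captchaService.validate(captchaToken);
  }
}
```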
### Note
This PR contains some changes in unrelated files, like indentation,
spacing, inverted commas, etc. I ran "yarn nx fmt:fix twenty-front" and
"yarn nx lint twenty-front -- --fix".
### Screenshots
<img width="869" alt="image"
src="https://github.com/twentyhq/twenty/assets/60139930/a75f5677-9b66-47f7-9730-4ec916073f8c">
---------
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
Co-authored-by: Charles Bochet <charles@twenty.com>
Previously we had to create a separate API key to give the Chrome
extension access so we could make calls to the DB. This PR includes
logic to initiate an OAuth flow with the PKCE method, which redirects to
the `Authorise` screen to grant access to server tokens (a sketch of the
PKCE step follows the list below).
Implemented in this PR -
1. make `redirectUrl` a non-nullable parameter
2. add `NODE_ENV` to the environment variable service
3. new env variable `CHROME_EXTENSION_REDIRECT_URL` on the server side
4. strict checks for `redirectUrl`
5. try/catch blocks on utils DB query methods
6. refactor Apollo Client to handle the `unauthorized` condition
7. input field to enter the server URL (for self-hosting)
8. state to show the user if they're already connected
9. show an error if the OAuth flow is cancelled by the user
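For reference, a minimal PKCE helper sketch using the Web Crypto API (not the extension's actual code):

```typescript
// Base64url-encode raw bytes (no padding), as required by RFC 7636.
const base64UrlEncode = (bytes: Uint8Array): string =>
  btoa(String.fromCharCode(...bytes))
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');

// Generate a random code_verifier and its SHA-256 code_challenge; the
// challenge is sent with the authorize request, the verifier with the
// token exchange.
export const generatePkcePair = async () => {
  const codeVerifier = base64UrlEncode(
    crypto.getRandomValues(new Uint8Array(32)),
  );
  const digest = await crypto.subtle.digest(
    'SHA-256',
    new TextEncoder().encode(codeVerifier),
  );
  return {
    codeVerifier,
    codeChallenge: base64UrlEncode(new Uint8Array(digest)),
  };
};
```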
Follow-up PR -
Renew token logic
---------
Co-authored-by: Félix Malfait <felix@twenty.com>
Refactored the code to introduce two different concepts:
- AuditLogs (immutable, raw data)
- TimelineActivities (user-friendly, transformed data)
Still some work needed:
- Add messages, files, and calendar events to the timeline (~2 hours if
done naively)
- Refactor the repository to try to abstract concepts where we can
(TBD, wait for Twenty ORM)
- Introduce the ability to display child timelines on the parent
timeline, with filtering (~2 days)
- Improve the UI: add links to open the note/task, improve the diff
display, etc. (half a day)
- Decide the path forward for Tasks vs Notes: either introduce a new
field type "Record Type" and start going in that direction, or split
into two objects?
- Trigger updates when a field is changed (will be solved by real-time /
websockets: 2 weeks)
- Integrate behavioral events (1 day for a POC, 1 week for
clean/documented)
<img width="1248" alt="Screenshot 2024-04-12 at 09 24 49"
src="https://github.com/twentyhq/twenty/assets/6399865/9428db1a-ab2b-492c-8b0b-d4d9a36e81fa">
## Context
- Rename remaining V2 services.
- Delete messages in the DB when the Gmail history tells us they've been
deleted. I removed the logic where we store those ids in a cache since
it's a bit overkill: we don't need to query Gmail and can use those ids
directly. The strategy is to delete the message channel message
association of the current channel, not the message or the thread, since
they can still be linked to other channels (see the sketch below).
However, we will need to call the threadCleaner service on the workspace
to remove orphan threads and non-associated messages.
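A sketch of that deletion, assuming a TypeORM-style repository and hypothetical entity/field names:

```typescript
import { In, Repository } from 'typeorm';

// Hypothetical entity shape.
class MessageChannelMessageAssociation {
  id!: string;
  messageChannelId!: string;
  messageExternalId!: string;
}

// Drop only the association between this channel and the deleted
// messages; messages and threads may still be linked to other channels,
// so orphan cleanup is left to the threadCleaner service.
export const handleDeletedMessages = async (
  repository: Repository<MessageChannelMessageAssociation>,
  messageChannelId: string,
  deletedMessageExternalIds: string[],
): Promise<void> => {
  if (deletedMessageExternalIds.length === 0) return;

  await repository.delete({
    messageChannelId,
    messageExternalId: In(deletedMessageExternalIds),
  });
};
```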
Note: deletion for full-sync is a bit tricky because we need the full
list of message ids to compare with the DB and make sure we don't
over-delete. Currently, to keep memory usage low, we don't have a
variable that holds all ids, as we flush the list after each page. The
easier solution would be to wipe everything before each full sync, but
that's probably not great for the user experience if they are currently
manipulating messages, since a full sync can happen without user
intervention (if a partial sync fails due to the historyId being
invalidated by Google for some reason).
## Context
With the addition of cron jobs, the app is building a lot of jobs and
storing them indefinitely. There is no real point in keeping all of them
in the queue once they have been processed (completed or failed), so we
are adding a new default option to the bull-mq driver.
## Implementation
See the bull-mq `JobsOptions` doc:
```typescript
/**
* If true, removes the job when it successfully completes
* When given a number, it specifies the maximum amount of
* jobs to keep, or you can provide an object specifying max
* age and/or count to keep. It overrides whatever setting is used in the worker.
* Default behavior is to keep the job in the completed set.
*/
removeOnComplete?: boolean | number | KeepJobs;
/**
* If true, removes the job when it fails after all attempts.
* When given a number, it specifies the maximum amount of
* jobs to keep, or you can provide an object specifying max
* age and/or count to keep. It overrides whatever setting is used in the worker.
* Default behavior is to keep the job in the failed set.
*/
removeOnFail?: boolean | number | KeepJobs;
```
`removeOnFail` should be a bit higher, since failed jobs are the ones we
are most likely to look at when needed.
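A sketch of wiring the default into the driver (the retention counts here are assumptions, not the actual values):

```typescript
import { Queue } from 'bullmq';

// Completed jobs are pruned aggressively; failed jobs are kept longer
// since they are the ones we usually inspect.
const queue = new Queue('default-queue', {
  connection: { host: 'localhost', port: 6379 },
  defaultJobOptions: {
    removeOnComplete: 100,
    removeOnFail: 500,
  },
});
```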
This PR introduces a new folder structure for business modules.
Cron commands and jobs are now stored within the same module/folder at
the root of the business module,
e.g. /modules/messaging/crons/commands instead of
/modules/messaging/commands/crons.
Patterns are now inside their own cron-command files since they don't
need to be exported.
Ideally, cron jobs and cron commands should have their logic within the
same class, but that is a bit harder than expected because commanderjs
and our worker both rely on class-inheritance checks, hence the first
approach is to move them into the same folder.
Also, the V2 suffix has been dropped from Messaging fullsync/partialsync
since V2 is the only version in use => breaking change for ongoing jobs
and crons. Jobs can simply be dropped, but we will need to re-run our
crons (only cron:messaging:gmail-fetch-messages-from-cache).
In the previous PR #4912, it seems that I forgot to pass the environment
on the backend.
Here is a quick fix!
I also added some "doc" in the .env.example.
Add support for new SENTRY_RELEASE and SENTRY_ENVIRONMENT envs.
They are optional and allow initializing Sentry with a release version
and an environment (used internally at Twenty).
The Docker image has been updated to integrate the new envs as build
arguments.
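A sketch of how the new envs feed into the Sentry setup (the real code reads them through the environment service):

```typescript
import * as Sentry from '@sentry/node';

// Both options are optional: Sentry simply omits release/environment
// tagging when they are undefined.
Sentry.init({
  dsn: process.env.SENTRY_DSN,
  release: process.env.SENTRY_RELEASE,
  environment: process.env.SENTRY_ENVIRONMENT,
});
```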
## Context
We are now removing the Messaging V2 feature flag so V2 is used
everywhere.
## Implementation
- Renaming FetchWorkspaceMessagesCommandsModule to
MessagingCommandModule to make it more generic, since it hosts all
commands related to the messaging module.
- Creating a crons folder inside commands and jobs. Crons should be
named xxx.cron.command.ts instead of xxx.command.ts; same for jobs,
which should be named xxx.cron.job.ts. In a future PR we should make
sure those cron jobs implement a CronJob interface, since a CronJob is a
bit different (it does not take a payload, compared to a Job); see the
sketch after this list.
- Cron commands have been renamed to "cron:$module:command", so
`fetch-all-workspaces-messages-from-cache:cron:start` has been renamed
to `cron:messaging:gmail-fetch-messages-from-cache`. Also, having to
create a command to stop a cron is a bit painful to maintain, so I
removed those for now; this can easily be done manually with pg-boss or
bull-mq.
- Removing the full-sync and partial-sync commands, as they were there
for testing only. We might put them back at some point, but we will have
to adapt the code anyway.
- The feature flag has been removed from the MessageChannel standard
object to make sure those new columns are created during the next
sync-metadata.
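A sketch of the CronJob/Job distinction mentioned above (hypothetical interfaces, not yet in the codebase):

```typescript
// A regular job handles a payload pushed onto the queue...
interface Job<T> {
  handle(payload: T): Promise<void>;
}

// ...while a cron job is triggered by a schedule and carries no payload.
interface CronJob {
  handle(): Promise<void>;
}
```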
## Context
When running a command, the process should end normally; however, it
stays hanging due to the open connection with the redis client (when
CACHE_STORAGE_TYPE=redis).
This PR adds the necessary logic to gracefully close the connection once
the module is destroyed. Thanks to that, the command process now
properly ends once executed.
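A sketch of the shutdown hook (the service name and client wiring are simplified):

```typescript
import { Injectable, OnModuleDestroy } from '@nestjs/common';
import Redis from 'ioredis';

@Injectable()
export class RedisCacheStorageService implements OnModuleDestroy {
  // Simplified: the real client is configured from the cache env vars.
  private readonly redisClient = new Redis();

  async onModuleDestroy(): Promise<void> {
    // quit() flushes pending commands and closes the connection, letting
    // the command process exit instead of hanging.
    await this.redisClient.quit();
  }
}
```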
* fixes
* saving workspaceMemberId and personId when saving attendees
* add typing
* use Map
* improve saveMessageParticipants
* fix role type
* move logic in a service
* create new service
* use new service in calendar-event-attendee.service
* modify service to include more common logic
* add default value to isOrganizer in calendar-event-attendee.object-metadata
* rename folder
* renaming
* Begin implementing events on the frontend
* Rename JSON to RAW JSON
* Fix handling of json field on frontend
* Log user id
* Add frontend tests
* Update packages/twenty-server/src/engine/api/graphql/workspace-query-runner/jobs/save-event-to-db.job.ts
Co-authored-by: Weiko <corentin@twenty.com>
* Move db calls to a dedicated repository
* Add server-side tests
---------
Co-authored-by: Weiko <corentin@twenty.com>
* Add getter factory for attachments
* Override guard in test
* Add secret in env variables
* Return custom message on expiration
* Rename to signPayload
---------
Co-authored-by: Thomas Trompette <thomast@twenty.com>
* rename database services to repository
* refactor more repositories
* more refactoring
* followup
* remove unused imports
* fix
* fix
* Fix calendar listener being called when flag is off
* remove folders