Adjust onboarding activation to trigger only for new users with
pictures. This prevents unnecessary activation steps for other user
types, streamlining the flow.
# Introduction
Running `storybook` with `configuration=pages` reports coverage figures that
are not reproduced afterwards by the `storybook:coverage` command.
```sh
npx nx storybook:serve-and-test:static twenty-front --configuration=pages --shard=1/1 --checkCoverage=true
# ...
[TEST] Test Suites: 40 passed, 40 total
[TEST] Tests: 52 passed, 52 total
[TEST] Snapshots: 0 total
[TEST] Time: 84.786 s
[TEST] Ran all test suites.
[TEST] Coverage file (13067196 bytes) written to .nyc_output/coverage.json
[TEST] > nx storybook:coverage twenty-front --coverageDir=coverage/storybook --checkCoverage=true
[TEST]
[TEST]
[TEST] > nx run twenty-front:"storybook:coverage" --coverageDir=coverage/storybook --checkCoverage=true
[TEST]
[TEST] > npx nyc report --reporter=lcov --reporter=text-summary -t coverage/storybook --report-dir coverage/storybook --check-coverage=true --cwd=packages/twenty-front
[TEST]
[TEST]
[TEST] =============================== Coverage summary ===============================
[TEST] Statements : 70.45% ( 775/1100 )
[TEST] Branches : 45.39% ( 197/434 )
[TEST] Functions : 63.52% ( 209/329 )
[TEST] Lines : 71.28% ( 767/1076 )
[TEST] ================================================================================
[TEST]
```
```sh
> npx nyc report --reporter=lcov --reporter=text-summary -t coverage/storybook --report-dir coverage/storybook --check-coverage=true --cwd=packages/twenty-front
=============================== Coverage summary ===============================
Statements : 37.4% ( 9326/24931 )
Branches : 22.99% ( 2314/10063 )
Functions : 28.27% ( 2189/7741 )
Lines : 37.81% ( 9261/24488 )
================================================================================
ERROR: Coverage for lines (37.81%) does not meet global threshold (39%)
ERROR: Coverage for branches (22.99%) does not meet global threshold (23%)
ERROR: Coverage for statements (37.4%) does not meet global threshold (39%)
Warning: command "npx nyc report --reporter=lcov --reporter=text-summary -t coverage/storybook --report-dir coverage/storybook --check-coverage=true --cwd=packages/twenty-front" exited with non-zero status code
```
## Fix
Persist the configuration scope argument through to the `check-coverage` command.
## Question
Should we add a step in `ci-front` that would merge the
`performance,modules,pages` coverage reports and compute the `global`
coverage? => I think this adds no value, as we still compute each of
them individually.
# Introduction
It seems I simply overlooked adding it back while debugging.
# Centralization
Please note that we only have access to the following contexts within
job matrix properties:
```
Available expression contexts: github, inputs, vars, needs
```
We could centralize the `storybook_scope` as a job `outputs` to prevent
this from recurring.
Refers to #8128
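Since `needs` is one of the contexts available in matrix expressions, one way to centralize the scope is to expose it once as a job output and consume it downstream. A sketch under assumed job and output names (not the actual workflow):

```yml
jobs:
  compute-scope:
    runs-on: ubuntu-latest
    outputs:
      storybook_scopes: ${{ steps.scope.outputs.storybook_scopes }}
    steps:
      - id: scope
        # Emit the scopes once, as a JSON array, into the job output
        run: echo 'storybook_scopes=["performance","modules","pages"]' >> "$GITHUB_OUTPUT"
  front-sb-test:
    needs: compute-scope
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # `needs` is usable here, unlike most other contexts
        storybook_scope: ${{ fromJSON(needs.compute-scope.outputs.storybook_scopes) }}
    steps:
      - run: echo "Testing scope ${{ matrix.storybook_scope }}"
```

Any job that later needs the scope list would then read it from `needs.compute-scope.outputs` instead of duplicating the literal.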
Changes Introduced:
- Added i18n configuration.
- Added a feature flag for localization.
- Enabled language switching based on the flag.
---------
Co-authored-by: Félix Malfait <felix@twenty.com>
Before this change, the `recordIndexId` was the plural object name. This
caused problems because the component states were shared across every view
of an object: when we switched from one view to another, some states
weren't reset.
This PR fixes this by:
- Creating an effect at the same level of page change effect to set the
`currentViewId` inside the object `contextStore`
- Adding the `currentViewId` to the `recordIndexId`
Follow ups:
- We need to get rid of
`packages/twenty-front/src/modules/views/states/currentViewIdComponentState.ts`
and use the context store instead
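The id composition described above can be sketched as follows (the function name is hypothetical; the real derivation lives in the record index module):

```typescript
// Hypothetical illustration: appending the current view id to the plural
// object name yields a distinct component instance id per view, so
// component states are re-created instead of shared across views.
const computeRecordIndexId = (
  objectNamePlural: string,
  currentViewId: string,
): string => `${objectNamePlural}-${currentViewId}`;
```

With this shape, `companies` viewed through two different views produces two different `recordIndexId` values, which is what prevents state leaking between views.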
Ensure early return in `hasUserAccessToWorkspaceOrThrow` for existing
users during sign-in. This prevents further unnecessary execution when
access validation is complete.
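A minimal sketch of the early-return shape (types and the fallthrough logic are assumptions for illustration, not the actual implementation):

```typescript
type User = { id: string };

function hasUserAccessToWorkspaceOrThrow(existingUser: User | null): boolean {
  if (existingUser) {
    // Early return: for an existing user signing in, access validation is
    // already complete, so we skip the remaining checks entirely.
    return true;
  }
  // ...further invitation / permission checks would follow here...
  throw new Error('User has no access to this workspace');
}
```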
Removed unused UserService dependency and simplified authCallback
function by using destructured parameters. Added checks for user email
and integrated invitation lookup and access validation for enhanced SSO
sign-in/up flow.
We were using the metadata client for legacy reasons. The architecture is
not great for the Core engine: workflows are available in both the data and
metadata clients. It makes more sense to use the data client, since
workflows are part of the standard objects.
In this PR:
- migrate WorkspaceActivationStatus to twenty-shared (and update the casing
to make FE and BE consistent)
- introduce isWorkspaceActiveOrSuspended in twenty-shared
- refactor the code to use it (when we fetch data on the FE, we want to
keep SUSPENDED workspaces working, and we want the same when we sync
workspaces)
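A sketch of what the shared helper could look like (the enum members beyond ACTIVE and SUSPENDED are assumptions; the real definition lives in twenty-shared):

```typescript
enum WorkspaceActivationStatus {
  ACTIVE = 'ACTIVE',
  SUSPENDED = 'SUSPENDED',
  INACTIVE = 'INACTIVE',
  PENDING_CREATION = 'PENDING_CREATION',
}

// Shared predicate so FE data fetching and workspace sync agree on which
// workspaces should keep working, including SUSPENDED ones.
const isWorkspaceActiveOrSuspended = (workspace?: {
  activationStatus: WorkspaceActivationStatus;
}): boolean =>
  workspace?.activationStatus === WorkspaceActivationStatus.ACTIVE ||
  workspace?.activationStatus === WorkspaceActivationStatus.SUSPENDED;
```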
# Introduction
Applying the most prevalent `yaml` file extension to all
`.github/workflows/` files.
## Notes
Regarding the `restore-cache` and `save-cache` composite actions: GitHub
agnostically searches for any `Dockerfile`, `action.yml`, or `action.yaml`
within the target folder when invoking composite actions as follows:
```yml
- name: Restore storybook build cache
if: steps.changed-files.outputs.any_changed == 'true'
uses: ./.github/workflows/actions/restore-cache
with:
key: ${{ env.STORYBOOK_BUILD_CACHE_KEY }}
```
### Context
- Update the /plan-required page to let users start a free trial without a
credit card
- Update usePageChangeEffectNavigateLocation to redirect paused and
canceled subscriptions (suspended workspaces) to the /settings/billing page
### To do
- [x] Update usePageChangeEffectNavigateLocation test
- [x] Update ChooseYourPlan sb test
closes #9520
---------
Co-authored-by: etiennejouan <jouan.etienne@gmail.com>
After hitting "Use as draft", we redirect the user to the workflow page.
If the user has already visited that page, the workflow and its current
version will be stored in the cache, so we also need to update that cache.
I tried to update it manually, but it was more complex than expected:
steps and triggers are not fully defined objects.
I ended up with a simple refetch query, which I wanted to avoid, but it is
at least fully working with minimal logic.
Renamed `user` to `payload` for clearer context and updated related
references. Adjusted the login token generation to use `workspace.id`,
improving readability and maintainability of the code.
Closes twentyhq/twenty#8240
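A minimal sketch of the renamed shape (every name here is an assumption for illustration, not the actual service code):

```typescript
type Workspace = { id: string };
type LoginTokenPayload = { sub: string; workspaceId: string };

// `payload` (formerly `user`) carries the authenticated identity; the
// workspace id is taken directly from workspace.id.
const buildLoginTokenPayload = (
  payload: { email: string },
  workspace: Workspace,
): LoginTokenPayload => ({
  sub: payload.email,
  workspaceId: workspace.id,
});
```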
This PR introduces email verification for non-Microsoft/Google Emails:
## Email Verification SignInUp Flow:
https://github.com/user-attachments/assets/740e9714-5413-4fd8-b02e-ace728ea47ef
The email verification link is sent as part of the
`SignInUpStep.EmailVerification`. The email verification token
validation is handled on a separate page (`AppPath.VerifyEmail`). A
verification email resend can be triggered from both pages.
## Email Verification Flow Screenshots (In Order):



## Sent Email Details (Subject & Template):


### Successful Email Verification Redirect:

### Unsuccessful Email Verification (invalid token, invalid email, token
expired, user does not exist, etc.):

### Force Sign In When Email Not Verified:

# TODOs:
## Sign Up Process
- [x] Introduce server-level environment variable
IS_EMAIL_VERIFICATION_REQUIRED (defaults to false)
- [x] Ensure users joining an existing workspace through an invite are
not required to validate their email
- [x] Generate an email verification token
- [x] Store the token in appToken
- [x] Send email containing the verification link
- [x] Create new email template for email verification
- [x] Create a frontend page to handle verification requests
## Sign In Process
- [x] After verifying user credentials, check if the user's email is
verified and prompt them to verify it
- [x] Show an option to resend the verification email
## Database
- [x] Rename the `emailVerified` column on `user` to `isEmailVerified`
for consistency
## During Deployment
- [x] Run a script/sql query to set `isEmailVerified` to `true` for all
users with a Google/Microsoft email and all users that show an
indication of a valid subscription (e.g. linked credit card)
- I have created a draft migration file below that shows one possible
approach to implementing this change:
```typescript
import { MigrationInterface, QueryRunner } from 'typeorm';

export class UpdateEmailVerifiedForActiveUsers1733318043628
  implements MigrationInterface
{
  name = 'UpdateEmailVerifiedForActiveUsers1733318043628';

  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`
      CREATE TABLE core."user_email_verified_backup" AS
      SELECT id, email, "isEmailVerified"
      FROM core."user"
      WHERE "deletedAt" IS NULL;
    `);
    await queryRunner.query(`
      -- Update isEmailVerified for users who have been part of workspaces with active subscriptions
      UPDATE core."user" u
      SET "isEmailVerified" = true
      WHERE EXISTS (
        -- Check if user has been part of a workspace through the userWorkspace table
        SELECT 1
        FROM core."userWorkspace" uw
        JOIN core."workspace" w ON uw."workspaceId" = w.id
        WHERE uw."userId" = u.id
        -- Check for valid subscription indicators
        AND (
          w."activationStatus" = 'ACTIVE'
          -- Add any other subscription-related conditions here
        )
      )
      AND u."deletedAt" IS NULL;
    `);
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`
      UPDATE core."user" u
      SET "isEmailVerified" = b."isEmailVerified"
      FROM core."user_email_verified_backup" b
      WHERE u.id = b.id;
    `);
    await queryRunner.query(`DROP TABLE core."user_email_verified_backup";`);
  }
}
```
---------
Co-authored-by: Antoine Moreaux <moreaux.antoine@gmail.com>
Co-authored-by: Félix Malfait <felix@twenty.com>
Introduce unit tests to validate the behavior of the useSignInUpForm
hook. Tests cover default initialization, handling of developer
defaults, and prefilled values based on state.
# Introduction
Unless I'm mistaken, in the existing `yarn-install` composite action,
dependencies are cached twice.
## Cached through `setup-node`
As we're providing the `yarn` value to the `cache` input, setup-node will
cache the global `.yarn` cache folder:
```yml
- name: Setup Node.js and get yarn cache
uses: actions/setup-node@v3
with:
node-version: 18
cache: yarn
```
## Programmatic `node_modules` caching
We also still manually cache the `node_modules` folder using:
```yml
- name: Cache node modules
id: node-modules-cache
uses: actions/cache@v3
with:
path: |
node_modules
packages/*/node_modules
key: root-node_modules-${{ hashFiles('yarn.lock') }}
restore-keys: root-node_modules-
```
By the way, it seems the `yarn.lock` hash is useless here, as the
`restore-keys` pattern omits it.
This could, unless I'm mistaken, lead to invalid cache restores.
## Runtime
The current infra results in the following install deps logs:
```sh
Prepare all required actions
Getting action download info
Download action repository 'actions/setup-node@v3' (SHA:1a4442cacd436585916779262731d5b162bc6ec7)
Download action repository 'actions/cache@v3' (SHA:f4b3439a656ba812b8cb417d2d49f9c810103092)
Run ./.github/workflows/actions/yarn-install
Run actions/setup-node@v3
Found in cache @ /opt/hostedtoolcache/node/18.20.5/x64
Environment details
/usr/local/bin/yarn --version
4.4.0
/usr/local/bin/yarn config get cacheFolder
/home/runner/.yarn/berry/cache
/usr/local/bin/yarn config get enableGlobalCache
true
Cache Size: ~421 MB (441369819 B)
/usr/bin/tar -xf /home/runner/work/_temp/883eae34-9b21-4684-9658-cdb937f62965/cache.tzst -P -C /home/runner/work/twenty/twenty --use-compress-program unzstd
Cache restored successfully
Cache restored from key: node-cache-Linux-yarn-a8c18b6600f1bc685e60192f39a0a0c5c51b918829090129d3787302789547fc
Run actions/cache@v3
Cache Size: ~506 MB (530305961 B)
/usr/bin/tar -xf /home/runner/work/_temp/becd3767-ac80-4ca9-8d21-93fa61e3070c/cache.tzst -P -C /home/runner/work/twenty/twenty --use-compress-program unzstd
Cache restored successfully
Cache restored from key: root-node_modules-a8c18b6600f1bc685e60192f39a0a0c5c51b918829090129d3787302789547fc
```
Here we can see in detail that two separate caches are restored:
```
Cache Size: ~421 MB (441369819 B)
Cache Size: ~506 MB (530305961 B)
```
## Suggestion
We should stick to a single dependency-caching mechanism. Also, caching
the `node_modules` folder is not
[recommended](611465405c/examples.md (node---npm)).
That's why I would factor our caching down to the `setup-node` scope only.
## The cache-primary-key crafting
Within the cache primary key we will inject the following:
- The Node version: since we're caching `node_modules` directly, this
avoids cache desync if we upgrade Node.
- A hash of the lockfile: if the lockfile changes, we re-install the deps
from scratch, so there is no need to programmatically listen for
`changed-files` on `package.json` and the lockfile.
- A strict installation, with `check-cache` and `enableHardenedMode` set
to true, and obviously the `--immutable` flag (also adding
`--immutable-cache` on principle, even if on paper it is overkill, as a
cache restoration followed by an install should never occur).
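Putting those pieces together, the single caching step could look roughly like this (the key format, action versions, and env names are assumptions, not the final implementation):

```yml
- uses: actions/setup-node@v4
  with:
    node-version: 18
- name: Cache node_modules keyed on Node version and lockfile hash
  uses: actions/cache@v4
  with:
    path: |
      node_modules
      packages/*/node_modules
    # No restore-keys fallback: a lockfile change forces a fresh install
    key: node_modules-node18-${{ hashFiles('yarn.lock') }}
- name: Strict, immutable install
  run: yarn install --immutable --immutable-cache --check-cache
  env:
    YARN_ENABLE_HARDENED_MODE: '1'
```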