Hello, I am _awache ( @_awache ), from DBRE at KINTO Technologies (KTC). In this article, I'll provide a comprehensive overview of how we implemented a safe password rotation mechanism for database users, primarily those registered in Aurora MySQL, the challenges we encountered, and the peripheral tooling we developed along the way. Since this is a lengthy blog post, here is a brief summary to start.

Summary

Background: our company introduced a policy requiring database users' passwords to be rotated at regular intervals.

Solution. Considered: MySQL Dual Password, which sets a primary and a secondary password using the Dual Password feature available in MySQL 8.0.14 and later; and the AWS Secrets Manager rotation function, which automates password updates and strengthens security. Adopted: we chose the AWS Secrets Manager rotation function for its ease of setup and broad coverage.

Project kickoff: at the start of the project, we created an inception deck and clarified the boundaries of responsibility for cost, security, and resources.

What we developed in this project

Lambda functions: simply using the AWS-provided secret rotation mechanism did not satisfy many of KTC's requirements, so after weighing the operational aspects we ended up building a number of Lambda functions.

- Lambda function for the single user strategy. Purpose: to rotate the password of a single user. Setup: configured on Secrets Manager; it runs the secret rotation at the designated time and updates the password.
- Lambda function for the alternate users strategy. Purpose: to update the passwords of two users alternately for high availability. Setup: configured on Secrets Manager; the first rotation creates a second user (a clone), and subsequent rotations switch passwords between them.
- Lambda function for secret rotation notifications. Purpose: to report the results of secret rotations. Trigger: CloudTrail events (RotationStarted, RotationSucceeded, RotationFailed). Function: stores the rotation results in DynamoDB and notifies Slack, using the stored Slack timestamp to append follow-ups to the existing thread.
- Lambda function for managing the DynamoDB table of rotation results. Purpose: to store rotation results in DynamoDB as evidence for submission to the security team. Function: runs on CloudTrail events, saves the rotation results to DynamoDB, and the stored records feed the SLI notifications.
- Lambda function for SLI notifications. Purpose: to monitor rotation status and send SLI notifications. Function: retrieves information from DynamoDB to track the progress of secret rotations and notifies Slack as needed.
- Lambda function for deciding rotation schedules. Purpose: to determine the rotation execution time for each DBClusterID. Function: generates a new schedule based on the existing secret rotation settings, saves it to DynamoDB, and sets the rotation window and window duration.
- Lambda function for applying rotation settings. Purpose: to apply the decided schedules to Secrets Manager. Function: configures secret rotation at the designated times using the information stored in DynamoDB.

A tool for registering secret rotations: the actual registration is done with a separately developed tool that can be run locally.

- Tool for setting the Secrets Rotation schedule. Purpose: to set the secret rotation schedule per database user. Function: applies the secret rotation settings for the specified DBClusterID and DBUser combination, based on the data saved in DynamoDB.
Final Architecture Overview

We initially believed it could be done much more simply, but it turned out to be more complex than expected...

![The whole image](/assets/blog/authors/_awache/20240812/secrets_rotation_overview.png =750x)

Results: we automated the entire secret rotation process, reducing the effort spent on security and day-to-day management; we built an overall architecture that satisfies our governance constraints; and KTC will keep improving it, aiming for safe and efficient database operations built on secret rotation.

Now, let's get into the main part.

Introduction

KTC introduced a policy requiring database users' passwords to be rotated at short, fixed intervals. However, rotating passwords is not a trivial exercise. To change a database user's password, the system has to be stopped, the password changed on the database side, the system's configuration files updated, and the behavior verified. In other words, just to change a database user's password, we have to carry out maintenance work, including a service stop, that delivers no direct value. Doing this for every service at very short, fixed intervals would be extremely tedious. This article explains how we solved this problem, with concrete examples.

Solution Considerations

We considered two main solutions: using the MySQL Dual Password feature, or using the rotation function of Secrets Manager.

MySQL Dual Password

The Dual Password feature is available in MySQL 8.0.14 and later. It lets you set both a primary and a secondary password, enabling password changes without stopping the system or applications. The basic steps are: set a new primary password with ALTER USER 'user'@'host' IDENTIFIED BY 'new_password' RETAIN CURRENT PASSWORD;, which keeps the current password as the secondary one; update all applications to connect with the new password; then discard the secondary password with ALTER USER 'user'@'host' DISCARD OLD PASSWORD;.

Rotation function of Secrets Manager

AWS Secrets Manager supports periodic automatic rotation of secrets. Enabling secret rotation not only reduces the burden of managing passwords manually but also contributes significantly to stronger security. To enable it, you set a rotation policy on the secret and assign a Lambda function that performs the rotation.

![Rotation setting screen](/assets/blog/authors/_awache/20240812/rotation_setting.png =750x)

Lambda rotation function. There are two options: "Create a rotation function", where the code provided by AWS is deployed automatically so you can use it right away without writing your own Lambda function; and "Use a rotation function from your account", where you use a Lambda function you created yourself, or reuse one previously created via "Create a rotation function".

Rotation strategy

Single user: this strategy rotates the password of a single user. Database connections are maintained during the rotation, and with an appropriate retry strategy you can reduce the risk of being denied access while the credentials are updated. New connections made after the rotation must use the new credentials (password).

Alternate users: I found this strategy hard to picture even after reading the manual, but putting it into words, it works roughly as follows. A single secret alternates between the credentials (username and password pairs) of two users: the first rotation creates a second user (a clone), and subsequent rotations switch passwords between the two. This approach suits applications that need high database availability, because a valid set of credentials remains available even while a rotation is in progress. Note that the clone user has the same access rights as the original user, so when privileges change, the two users' privileges have to be kept in sync.

Here are some images. Changes before and after rotation:

![Before/after rotation](/assets/blog/authors/_awache/20240812/rotation_exec.png =750x)

It may be a little hard to see, but as the figure shows, when a password rotation runs, "_clone" is appended to the username. On the first rotation, a separate user with the same privileges as the existing user is also created on the database side; from the second rotation onward, that clone is reused and its password keeps being updated.

![Alternate user](/assets/blog/authors/_awache/20240812/multi_user_rotation.png =750x)
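Under the hood, both strategies are driven by a rotation Lambda function that Secrets Manager invokes once per step (createSecret, setSecret, testSecret, finishSecret). As a rough reference, here is a minimal Python sketch of that four-step contract; createSecret and finishSecret use standard Secrets Manager APIs, while setSecret and testSecret are left as placeholders because their contents depend on the strategy and the database engine. This is an illustrative skeleton, not the AWS-provided implementation we actually deploy.

```python
import json
import boto3

secretsmanager = boto3.client("secretsmanager")

def lambda_handler(event, context):
    """Invoked by Secrets Manager once for each rotation step."""
    secret_id = event["SecretId"]
    token = event["ClientRequestToken"]  # version ID staged as AWSPENDING
    step = event["Step"]

    if step == "createSecret":
        # Copy the current secret, swap in a freshly generated password,
        # and store it as the AWSPENDING version.
        current = json.loads(
            secretsmanager.get_secret_value(
                SecretId=secret_id, VersionStage="AWSCURRENT"
            )["SecretString"]
        )
        current["password"] = secretsmanager.get_random_password(
            PasswordLength=32, ExcludeCharacters="/@\"'\\"
        )["RandomPassword"]
        secretsmanager.put_secret_value(
            SecretId=secret_id,
            ClientRequestToken=token,
            SecretString=json.dumps(current),
            VersionStages=["AWSPENDING"],
        )
    elif step == "setSecret":
        # Apply the AWSPENDING credentials to the database: ALTER USER for the
        # single-user strategy, or create/update the "_clone" user for the
        # alternate-user strategy.
        ...
    elif step == "testSecret":
        # Try to log in to the database with the AWSPENDING credentials.
        ...
    elif step == "finishSecret":
        # Promote the AWSPENDING version to AWSCURRENT.
        versions = secretsmanager.describe_secret(SecretId=secret_id)["VersionIdsToStages"]
        current_version = next(v for v, stages in versions.items() if "AWSCURRENT" in stages)
        secretsmanager.update_secret_version_stage(
            SecretId=secret_id,
            VersionStage="AWSCURRENT",
            MoveToVersionId=token,
            RemoveFromVersionId=current_version,
        )
    else:
        raise ValueError(f"Unknown rotation step: {step}")
```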
The Solution Adopted

We decided to use the Secrets Manager rotation function for the following reasons.

Ease of setup: with MySQL Dual Password, you have to prepare a script for the password change and then get the updated password reflected in the application; with the Secrets Manager rotation function, as long as the service always retrieves its connection information from Secrets Manager, the product side needs no code changes.

Coverage: MySQL Dual Password is supported only in MySQL 8.0.14 and later (Aurora MySQL 3.0 or later), while the Secrets Manager rotation function covers all the RDBMSs KTC uses (Amazon Aurora and Redshift) and can also handle credentials other than database passwords, such as API keys used by products.

Toward the Project Kickoff

Before starting the project, to get a clear outline of what we would and would not do, we first clarified the boundaries of responsibility for cost, security, and resources, defined what needed to be done, and created an inception deck. Here is a brief overview.

Breakdown of responsibilities

Cost: the product team bears the cost of the Secrets Manager secrets that store database passwords; the DBRE team bears the cost of the mechanism that performs secret rotation.

Security: the product team must always retrieve database connection information from Secrets Manager, and must re-fetch that information (for example by redeploying the application) at some point between one rotation and the next. The DBRE team is responsible for ensuring rotations complete within the company's defined governance limits, providing secret rotation records to the security team when required, never storing passwords in plain text (for example for history management), and keeping the rotation mechanism itself sufficiently secure.

Resources: the product team ensures that every user registered in a database is managed in Secrets Manager; the DBRE team ensures that the resources used for secret rotation run with the minimum necessary configuration.

What needed to be done

- Execute secret rotation within the company's defined governance limits.
- Detect the start, completion, success, or failure of a secret rotation and notify the relevant product team.
- If a secret rotation fails, complete recovery without any impact on the product.
- Use the same rotation timing for all users registered in the same DB cluster.
- Make it visible how well we comply with the company's governance standards.
Inception deck (an excerpt)

Why are we here: we are here to develop and roll out a system that complies with the company's security policy and automatically rotates database passwords within a fixed period. This automation aims to strengthen security, reduce management effort, and maintain compliance. The project is led by the DBRE team and uses AWS's rotation strategies to achieve safe and efficient password management.

Elevator pitch: for product owners and the security group who want to reduce the risk of security breaches and maintain compliance requirements, Secret Rotation is a database password management tool. It provides automated security hardening and reduces management effort and, unlike MySQL's Dual Password feature, it works with every RDBMS that AWS offers. And precisely because we are a company built on AWS services, we can use the latest cloud technology to deliver flexible, scalable security measures that meet our data protection standards.

PoC

For the PoC, we created the resources needed for secret rotation (a DB cluster and secrets) in our own verification environment and ran the rotation from the console. It worked so smoothly that it was clearly fit for practical use, and we had high hopes of being able to offer it right away. At the time, though, I had no idea of the difficulties (or rather, the tragedy) that lay ahead...

Architecture

Secret rotation on its own is not enough; we also need to provide users with a notification mechanism. Here is a quick look at the architecture that includes it.

Secret rotation overview

![The whole architecture](/assets/blog/authors/_awache/20240812/secrets_rotation_archtecture.png =750x)

Secret rotation is executed per secret registered in Secrets Manager. For clarity, take a monthly rotation as an example: with one rotation a month, the same password can be used for at most two months. As long as the application is redeployed at some point during that window, for example as part of a normal release, it ends up complying with the company's rotation rules without anyone having to think about it.

Storing rotation results in DynamoDB

During a secret rotation, status events are written to CloudTrail at the following points: process start: RotationStarted; process failure: RotationFailed; process completion: RotationSucceeded. There are other events as well; see the documentation on rotation log entries for details. We configured a CloudWatch Events rule so that these events trigger the notification Lambda function. Below is part of the Terraform code we actually use:

```hcl
cloudwatch_event_name        = "${var.environment}-${var.sid}-cloudwatch-event"
cloudwatch_event_description = "Secrets Manager Secrets Rotation. (For ${var.environment})"
event_pattern = jsonencode({
  "source" : ["aws.secretsmanager"],
  "$or" : [
    { "detail-type" : ["AWS API Call via CloudTrail"] },
    { "detail-type" : ["AWS Service Event via CloudTrail"] }
  ],
  "detail" : {
    "eventSource" : ["secretsmanager.amazonaws.com"],
    "eventName" : [
      "RotationStarted",
      "RotationFailed",
      "RotationSucceeded",
      "TestRotationStarted",
      "TestRotationSucceeded",
      "TestRotationFailed"
    ]
  }
})
```

The stored rotation results are also used as evidence for submission to the security team. The architecture reflecting everything so far looks like this:

![Architecture only for Secret Rotation](/assets/blog/authors/_awache/20240812/secrets_rotation_archtecture2.png =750x)

The main AWS resources we had to prepare to provide this feature are as follows.

Lambda functions for the alternate user strategy (separate Lambda functions are needed for MySQL and Redshift).
These are the alternate-user Lambda functions configured on Secrets Manager. We built them ourselves because there were many things, such as Lambda function settings and IAM, that the Lambda functions automatically generated by AWS could not cover under our internal infrastructure rules.

Lambda functions for the single user strategy (again, separate functions are needed for MySQL and Redshift). These are the single-user Lambda functions configured on Secrets Manager; the alternate user strategy cannot be applied to administrator users' passwords.

Lambda function for secret rotation notifications. We have to build the mechanism that tells people a rotation has happened ourselves. The status and results are recorded in CloudTrail, so we use those events as triggers for Slack notifications. Note that with event triggers, a separate Lambda invocation runs for each event.

DynamoDB table for storing rotation results. Rotation results are stored in DynamoDB, together with the Slack timestamp, so that later notifications can identify which Slack message they relate to and be posted into the same thread.

Why we chose to manage the Lambda functions for secret rotation ourselves

As a premise, we use the Lambda code provided by AWS. As mentioned above, the AWS-provided code can be deployed automatically, so it can be used immediately without creating individual Lambda functions. However, we first commit the code set to our own repository and then deploy it with Terraform. The main reasons are as follows.

Multiple services coexist in KTC's AWS accounts. When several services share the same AWS account, IAM permissions easily become too broad. We also run services across multiple regions, and since a Lambda function cannot be invoked across regions, the same code has to be deployed to each region using Terraform.

There are many database users that need secret rotation settings: just under 200 DB clusters and just under 1,000 database users. Configuring every secret by hand would be an enormous amount of management work.

Internal rules apply. Not only IAM but also tags are mandatory, and if functions were generated automatically one by one, the tags would have to be added afterwards.

The AWS-provided code is updated from time to time. Since the code comes from AWS this is only natural, but depending on the timing, an update could cause trouble.

I have listed several reasons, but in a nutshell, given our internal management rules it was simply more convenient for us to manage the code ourselves.

How we managed the Lambda functions for Secrets Rotation

This part was genuinely hard. At first we expected it to go smoothly, since AWS publishes sample Lambda code, but deploying something based on that sample produced all kinds of errors. Some errors occurred only on specific DB clusters even though everything worked in our verification environment, which made things extremely difficult. The code automatically generated from the console, on the other hand, was stable and error-free, so we needed a way to make good use of it. There are several possible approaches; here are the ones we tried.

1. Find a way to deploy from the sample code. The code itself is available at the link above, but matching all the required modules, including their versions, is difficult, and the Lambda code is updated fairly frequently, so we would have to keep up with it. We gave up on this approach because it was too much work, and if we were going to maintain this code ourselves anyway, it felt better to build something in-house by other means.
2. Automatically generate the secret rotation function from the console and download its Lambda code. In this method we generate the code from the console each time, download it locally, and apply it to our own Lambda functions. It is not particularly difficult. However, depending on when the code is generated, the downloaded code may differ from the code that is already running. This approach would have been acceptable, but having to perform a deployment every time just to refresh the code felt like a drag on automation.

3. Look at the CloudFormation template that runs behind the scenes when the secret rotation function is generated from the console, and see how the deployment is done. When you generate the function from the console, an AWS-provided CloudFormation stack runs in the background. By examining its template, you can obtain the S3 path of the code that AWS generates automatically.

Considering that fetching the Zip file directly from S3 removes the need to generate the secret rotation code every time, we judged method 3 to be the most efficient and adopted it. The actual script we use to download from S3 is as follows:

```bash
#!/bin/bash
set -eu -o pipefail

# Navigate to the script directory
cd "$(dirname "$0")"

source secrets_rotation.conf

# Function to download and extract the Lambda function from S3
download_and_extract_lambda_function() {
  local s3_path="$1"
  local target_dir="../lambda-code/$2"
  local dist_dir="${target_dir}/dist"

  echo "Downloading ${s3_path} to ${target_dir}/lambda_function.zip..."

  # Remove existing lambda_function.zip and dist directory
  rm -f "${target_dir}/lambda_function.zip"
  rm -rf "${dist_dir}"

  if ! aws s3 cp "${s3_path}" "${target_dir}/lambda_function.zip"; then
    echo "Error: Failed to download file from S3."
    exit 1
  fi
  echo "Download complete."

  echo "Extracting lambda_function.zip to ${dist_dir}..."
  mkdir -p "${dist_dir}"
  unzip -o "${target_dir}/lambda_function.zip" -d "${dist_dir}"
  cp -p "${target_dir}/lambda_function.zip" "${dist_dir}/lambda_function.zip"
  echo "Extraction complete."
}

# Create directories if they don't exist
mkdir -p ../lambda-code/mysql-single-user
mkdir -p ../lambda-code/mysql-multi-user
mkdir -p ../lambda-code/redshift-single-user
mkdir -p ../lambda-code/redshift-multi-user

# Download and extract Lambda functions
download_and_extract_lambda_function "${MYSQL_SINGLE_USER_S3_PATH}" "mysql-single-user"
download_and_extract_lambda_function "${MYSQL_MULTI_USER_S3_PATH}" "mysql-multi-user"
download_and_extract_lambda_function "${REDSHIFT_SINGLE_USER_S3_PATH}" "redshift-single-user"
download_and_extract_lambda_function "${REDSHIFT_MULTI_USER_S3_PATH}" "redshift-multi-user"

echo "Build complete."
```

Running this script at deployment time brings the code up to date; conversely, as long as the script is not run, the code that has been running so far continues to be used as-is.

Lambda function and DynamoDB for notifying secret rotation results

Secret rotation result notifications are triggered by events being PUT to CloudTrail. It would probably have been a little simpler to modify the rotation Lambda itself, but then there would have been no point in trying to make maximum use of the code provided by AWS. Before starting development, I naively assumed all we had to do was notify on the PUT trigger. It was not that simple. Let's look at the whole picture again.

![The whole architecture](/assets/blog/authors/_awache/20240812/secrets_rotation_archtecture.png =750x)

For notifications, a Slack thread is created when the rotation starts, and the result is appended to that thread when the rotation finishes, as shown below.
![Slack notification](/assets/blog/authors/_awache/20240812/slack_notification.png =750x)

The events we use this time are as follows: at the start of processing, the event written (PUT) to CloudTrail is RotationStarted; at the end of processing, the event is RotationSucceeded on success and RotationFailed on failure. When the RotationStarted event fires at the start of processing, we store the timestamp of the Slack message in DynamoDB and later use it to append follow-up messages to the thread.

With this in mind, we had to decide what makes a DynamoDB item unique. We ended up combining the Secrets Manager SecretID with the scheduled date of the next rotation. The main DynamoDB columns are as follows (in reality we store quite a bit more information):

- SecretID: partition key.
- NextRotationDate: sort key; the scheduled date of the next rotation, obtainable with describe-secret.
- SlackTS: the timestamp of the first Slack message sent on the RotationStarted event; using this timestamp we can append messages to the Slack thread.
- VersionID: the version of the secret at the time of the RotationStarted event; by keeping the previous version we can roll back immediately and restore the pre-rotation password if trouble occurs.

The most troublesome point was that a single secret rotation writes to CloudTrail several times, so a separate Lambda invocation is triggered for each step. I understood this in theory, but in practice it was a real headache. As a consequence, we had to account for the following: the secret rotation process itself is very fast, and RotationStarted and RotationSucceeded (or RotationFailed) are written to CloudTrail at almost the same moment, so the notification Lambda runs twice almost simultaneously. Because the notification Lambda also handles the Slack post and the DynamoDB registration, the completion event can be processed before the RotationStarted handling has finished. When that happens, the destination thread cannot be determined and the completion message ends up being posted to Slack as a brand-new message. The fix was simple: if the event name is anything other than RotationStarted, the Lambda waits a few seconds before notifying Slack.
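To make the thread handling and the short wait concrete, here is a simplified sketch of what such a notification Lambda can look like. The DynamoDB table name, the Slack environment variables, and the way the secret ARN is extracted from the CloudTrail detail are assumptions for this example; our actual function stores more attributes and does more error handling.

```python
import json
import os
import time
import urllib.request
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("secrets-rotation-results")  # hypothetical table name
secretsmanager = boto3.client("secretsmanager")

def post_to_slack(text, thread_ts=None):
    """Post a message (optionally into a thread) and return its Slack timestamp."""
    payload = {"channel": os.environ["SLACK_CHANNEL"], "text": text}
    if thread_ts:
        payload["thread_ts"] = thread_ts
    req = urllib.request.Request(
        "https://slack.com/api/chat.postMessage",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {os.environ['SLACK_BOT_TOKEN']}"},
    )
    with urllib.request.urlopen(req) as res:
        return json.load(res).get("ts")

def lambda_handler(event, context):
    detail = event["detail"]
    event_name = detail["eventName"]  # RotationStarted / RotationSucceeded / RotationFailed
    # Where the secret ARN sits inside the CloudTrail detail is simplified here.
    secret_arn = (detail.get("additionalEventData") or {}).get("SecretId") \
        or (detail.get("requestParameters") or {}).get("secretId")

    if event_name == "RotationStarted":
        # Open a new Slack thread and remember its timestamp, keyed by
        # SecretID + next scheduled rotation date.
        desc = secretsmanager.describe_secret(SecretId=secret_arn)
        ts = post_to_slack(f"Secret rotation started: {secret_arn}")
        table.put_item(Item={
            "SecretID": secret_arn,
            "NextRotationDate": desc["NextRotationDate"].isoformat(),
            "SlackTS": ts,
            "Status": event_name,
        })
    else:
        # Start and end events arrive almost at the same time, so wait a few
        # seconds to let the RotationStarted handling store SlackTS first.
        time.sleep(5)
        latest = table.query(
            KeyConditionExpression=Key("SecretID").eq(secret_arn),
            ScanIndexForward=False,
            Limit=1,
        )["Items"]
        thread_ts = latest[0].get("SlackTS") if latest else None
        post_to_slack(f"Secret rotation {event_name}: {secret_arn}", thread_ts=thread_ts)
```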
Secret rotation can sometimes fail because of misconfiguration or similar issues. In most cases the error occurs before the database password is updated, so the product is not immediately affected. In such cases, recovery is performed with the following commands:

```bash
# Find the version whose stage is AWSPENDING (see VersionIdsToStages)
$ aws secretsmanager describe-secret --secret-id ${secret_id} --region ${region}

# ---------- Sample "Versions" output ----------
"Versions": [
    {
        "VersionId": "7c9c0193-33c8-3bae-9vko-4129589p114bb",
        "VersionStages": [
            "AWSCURRENT"
        ],
        "LastAccessedDate": "2022-08-30T09:00:00+09:00",
        "CreatedDate": "2022-08-30T12:53:12.893000+09:00",
        "KmsKeyIds": [
            "DefaultEncryptionKey"
        ]
    },
    {
        "VersionId": "cb804c1c-6d1r-4ii3-o48b-17f638469318",
        "VersionStages": [
            "AWSPENDING"
        ],
        "LastAccessedDate": "2022-08-30T09:00:00+09:00",
        "CreatedDate": "2022-08-30T12:53:22.616000+09:00",
        "KmsKeyIds": [
            "DefaultEncryptionKey"
        ]
    }
],
# -----------------------------------------------

# Remove the AWSPENDING stage from that version
$ aws secretsmanager update-secret-version-stage --secret-id ${secret_id} --remove-from-version-id ${version_id} --version-stage AWSPENDING --region ${region}

# Finally, trigger "Rotate secret immediately" for the secret from the console
```

This has not happened so far, but if trouble does occur after the database password has already been changed, we run the following commands to retrieve the previous password. Even then, because this is alternate-user rotation, the product does not immediately lose its database connection; in principle there is no problem until the next rotation runs.

```bash
$ aws secretsmanager get-secret-value --secret-id ${secret_id} --version-id ${version_id} --region ${region} --query 'SecretString' --output text | jq .

# user and password are the values obtained with aws secretsmanager get-secret-value above
$ mysql --defaults-extra-file=/tmp/.${admin_db_user}.cnf -e "ALTER USER ${user} IDENTIFIED BY '${password}'"

# Check the connection
$ mysql --defaults-extra-file=/tmp/.user.cnf -e "STATUS"
```

At this point we had built a foundation that achieves the following items from our to-do list: detect the start, completion, success, or failure of a secret rotation and notify the relevant product team; and, if a secret rotation fails, complete recovery without any impact on the product.

Our battle did not end here

With the above, the core functionality was in place, but three items on our list still remained: execute secret rotation within the company's defined governance limits; use the same rotation timing for all users registered in the same DB cluster; and make it visible how well we comply with the company's governance standards. To achieve these, we had to develop some peripheral functionality.

Building a mechanism that shows how well we comply with the company's governance constraints

Put simply, what we need here is to obtain a list of every user in every DB cluster and check whether each user's password was last changed within the period defined by our governance rules. Logging in to each DB cluster and running the following query gives the last password change date for each user:

```
mysql> SELECT User, password_last_changed FROM mysql.user;
+----------------+-----------------------+
| User           | password_last_changed |
+----------------+-----------------------+
| rot_test       | 2024-06-12 07:08:40   |
| rot_test_clone | 2024-07-10 07:09:10   |
:       :                 :              :
+----------------+-----------------------+
10 rows in set (0.00 sec)
```

This has to be run on every DB cluster, but we already have a daily job that collects metadata from all DB clusters, automatically generates ER diagrams and my.cnf files, and runs scripts that check the databases for inappropriate settings. So we could solve this simply by adding a step that collects the list of users and their last password change dates and saves them to DynamoDB. The main DynamoDB columns are:

- DBClusterID: partition key.
- DBUserName: sort key.
- PasswordLastChanged: the date the password was last changed.

In practice, we have to exclude users that are created automatically by RDS and are outside our control, as well as the "_clone" users created by the secret rotation mechanism. For that reason, the data we actually need is retrieved with the following query:

```sql
SELECT CONCAT_WS(',',
           IF(RIGHT(User, 6) = '_clone', LEFT(User, LENGTH(User) - 6), User),
           Host,
           password_last_changed)
FROM mysql.user
WHERE User NOT IN ('AWS_COMPREHEND_ACCESS', 'AWS_LAMBDA_ACCESS', 'AWS_LOAD_S3_ACCESS',
                   'AWS_SAGEMAKER_ACCESS', 'AWS_SELECT_S3_ACCESS', 'AWS_BEDROCK_ACCESS',
                   'rds_superuser_role', 'mysql.infoschema', 'mysql.session', 'mysql.sys',
                   'rdsadmin', '');
```
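As an illustration, the collection step added to that daily job could look roughly like the following. PyMySQL, the table name db-password-last-changed, and the shape of the clusters argument are assumptions made for this sketch; in reality the step is just one part of our existing metadata collection.

```python
import boto3
import pymysql

# Query from the article: normalizes "_clone" users and skips RDS-managed accounts.
PASSWORD_AGE_QUERY = """
SELECT CONCAT_WS(',',
       IF(RIGHT(User, 6) = '_clone', LEFT(User, LENGTH(User) - 6), User),
       Host, password_last_changed)
FROM mysql.user
WHERE User NOT IN ('AWS_COMPREHEND_ACCESS', 'AWS_LAMBDA_ACCESS', 'AWS_LOAD_S3_ACCESS',
                   'AWS_SAGEMAKER_ACCESS', 'AWS_SELECT_S3_ACCESS', 'AWS_BEDROCK_ACCESS',
                   'rds_superuser_role', 'mysql.infoschema', 'mysql.session', 'mysql.sys',
                   'rdsadmin', '')
"""

table = boto3.resource("dynamodb").Table("db-password-last-changed")  # hypothetical name

def collect_password_ages(clusters):
    """clusters: iterable of dicts such as
    {"cluster_id": "...", "host": "...", "user": "...", "password": "..."}."""
    with table.batch_writer(overwrite_by_pkeys=["DBClusterID", "DBUserName"]) as batch:
        for cluster in clusters:
            conn = pymysql.connect(host=cluster["host"], user=cluster["user"],
                                   password=cluster["password"])
            try:
                with conn.cursor() as cur:
                    cur.execute(PASSWORD_AGE_QUERY)
                    for (row,) in cur.fetchall():
                        db_user, _host, last_changed = row.split(",", 2)
                        batch.put_item(Item={
                            "DBClusterID": cluster["cluster_id"],
                            "DBUserName": db_user,
                            "PasswordLastChanged": last_changed,
                        })
            finally:
                conn.close()
```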
On top of this, we built an SLI Lambda that aggregates the information in DynamoDB. The output ends up looking like this:

![SLI notification](/assets/blog/authors/_awache/20240812/sli.png =750x)

The output fields are as follows:

- Total Items: the number of users across all DB clusters.
- Secrets Exist Ratio: the percentage of users for which a SecretID matching the KINTO Technologies naming convention exists in Secrets Manager.
- Rotation Enabled Ratio: the percentage for which the secret rotation feature is enabled.
- Password Change Due Ratio: the percentage of users who comply with the corporate governance rule.
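The SLI Lambda behind this output is essentially an aggregation over that DynamoDB table plus a few Secrets Manager lookups. Here is a simplified sketch: the secret naming rule, the 90-day limit, and the table name are placeholders for illustration, and the real function also posts the summary to Slack.

```python
from datetime import datetime, timedelta, timezone
import boto3

table = boto3.resource("dynamodb").Table("db-password-last-changed")  # hypothetical name
secretsmanager = boto3.client("secretsmanager")
PASSWORD_MAX_AGE = timedelta(days=90)  # example governance limit, not the real value

def compute_sli():
    # Scan all (DBClusterID, DBUserName, PasswordLastChanged) items.
    items, resp = [], table.scan()
    items.extend(resp["Items"])
    while "LastEvaluatedKey" in resp:
        resp = table.scan(ExclusiveStartKey=resp["LastEvaluatedKey"])
        items.extend(resp["Items"])

    total = len(items)
    secrets_exist = rotation_enabled = within_due = 0
    now = datetime.now(timezone.utc)

    for item in items:
        # Hypothetical naming rule for the secret that should hold this user.
        secret_name = f"{item['DBClusterID']}/{item['DBUserName']}"
        try:
            desc = secretsmanager.describe_secret(SecretId=secret_name)
            secrets_exist += 1
            if desc.get("RotationEnabled"):
                rotation_enabled += 1
        except secretsmanager.exceptions.ResourceNotFoundException:
            pass
        last_changed = datetime.fromisoformat(item["PasswordLastChanged"]).replace(
            tzinfo=timezone.utc)
        if now - last_changed <= PASSWORD_MAX_AGE:
            within_due += 1

    return {
        "Total Items": total,
        "Secrets Exist Ratio": secrets_exist / total if total else 0.0,
        "Rotation Enabled Ratio": rotation_enabled / total if total else 0.0,
        "Password Change Due Ratio": within_due / total if total else 0.0,
    }
```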
What matters most is getting the Password Change Due Ratio to 100%. As long as that is satisfied, there is strictly speaking no need to use the secret rotation feature at all. With this SLI notification mechanism, we achieved one more of our goals: making it visible how well we comply with the company's governance standards.

A mechanism to give users registered in the same DB cluster the same rotation timing

To turn this into a working mechanism, we had to write two sets of code: a mechanism that decides the rotation execution time for each DBClusterID, and a mechanism that configures the rotation on Secrets Manager at the time decided above. Each is described below.

The mechanism that decides the rotation execution time for each DBClusterID

As a premise, the execution time of a secret rotation is expressed with a schedule called the rotation window. There are two ways to write a rotation window, with different uses: a rate expression, used when you want rotation to run at an interval of a specified number of days; and a cron expression, used when you need finer control, such as a specific day of the week and time. Since we wanted rotations to run during weekday daytime hours, we chose cron expressions. The other thing to configure is the rotation's "window duration"; combining the two gives a reasonable degree of control over when a rotation actually runs.

The relationship between the rotation window and the window duration is as follows: the rotation window expresses the time by which a rotation completes, not when it starts; the window duration determines how much leeway before that time the rotation may run in; and the default window duration is 24 hours. In other words, if the rotation window is set to 10:00 on the fourth Tuesday of every month and no window duration is specified (so it defaults to 24 hours), the secret rotation will run somewhere between 10:00 on the fourth Monday and 10:00 on the fourth Tuesday. This is not intuitive, and if you do not understand this relationship, secret rotations can run at times you never expected.

With these premises in mind, we defined the requirements as follows:

- For each DBClusterID, the rotations of its DB users run in the same time slot.
- The window duration is 3 hours. If the window is set too tight and trouble occurs, multiple problems could pile up simultaneously in the time it takes to recover.
- Rotations run between 09:00 and 18:00 on weekdays from Tuesday to Friday. We do not run them on Mondays, which often fall on public holidays. Because the window duration is fixed at 3 hours, the cron expression itself can only be set within the six hours from 12:00 to 18:00, and cron expressions can only be written in UTC.
- Execution times are spread out as much as possible. If many secret rotations run at the same time they could run into various API limits, and if some error occurred, a burst of alerts would be hard to respond to all at once.

The overall flow of the Lambda processing is as follows. Data acquisition: fetch the list of DBClusterIDs from DynamoDB, and fetch the existing secret rotation settings from DynamoDB. Schedule generation: initialize every combination (slot) of week, weekday, and hour; check whether the target DBClusterID already appears in the existing secret rotation settings; if it does, place it into the same slot as its existing settings; distribute new DBClusterIDs evenly across the slots, putting each one into an empty slot if available and otherwise into the next slot; repeat until the end of the DBClusterID list. Data storage: filter out entries that duplicate existing data and store only the new secret rotation settings. Error handling and notification: when a serious error occurs, send an error message to Slack.

The resulting DynamoDB columns are:

- DBClusterID: partition key.
- CronExpression: the cron expression to set on the secret rotation.

It is a little hard to picture, but the idea is to end up with a state like the following:

![Slot assignment image](/assets/blog/authors/_awache/20240812/decide.png =750x)

With this, the mechanism that decides the rotation execution time for each DBClusterID is in place. However, this alone does not configure any actual secret rotation, so we also need a mechanism that applies the settings, described next.
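To make the slot and window rules above concrete, the conversion from a slot to a schedule expression, and the call that later applies it to a secret, can be sketched as follows. The helper names and the fixed JST offset are just for illustration; the cron syntax assumes the EventBridge-style fields (including the nth-weekday form such as TUE#4, as in the fourth-Tuesday example above) that Secrets Manager rotation schedules accept, and the 3-hour window is passed as the Duration.

```python
import boto3

def build_schedule_expression(week_of_month: int, weekday: str, hour_jst: int) -> str:
    """Turn a (week-of-month, weekday, hour) slot into a rotation cron expression.

    hour_jst is between 12 and 18 so that, with a 3-hour window, the rotation
    stays inside 09:00-18:00 JST; cron expressions can only be written in UTC.
    """
    hour_utc = (hour_jst - 9) % 24
    return f"cron(0 {hour_utc} ? * {weekday}#{week_of_month} *)"

def apply_rotation_schedule(secret_id: str, rotation_lambda_arn: str, schedule: str) -> None:
    """Apply the decided schedule to a secret (the 'apply' side described next)."""
    boto3.client("secretsmanager").rotate_secret(
        SecretId=secret_id,
        RotationLambdaARN=rotation_lambda_arn,
        RotationRules={"ScheduleExpression": schedule, "Duration": "3h"},
        RotateImmediately=False,  # only register the schedule, don't rotate now
    )

# Example: the fourth Tuesday of each month, 15:00 JST (06:00 UTC), 3-hour window.
# apply_rotation_schedule("my-db-user-secret", "arn:aws:lambda:...",
#                         build_schedule_expression(4, "TUE", 15))
```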
The mechanism that configures the rotation on Secrets Manager at the decided time

We do not believe that the secret rotation mechanism is the only way to uphold corporate governance. What matters is that the governance constraints defined by the company are satisfied. So rather than forcing everyone to use this mechanism, we wanted something that product teams choose to use because it is the safest and simplest option the DBRE team could come up with. It is also quite possible that, among the users in a DB cluster, a team wants some users managed with secret rotation and others managed by themselves in a different way. To accommodate that, we needed a command-line tool that configures secret rotation per database user tied to a given DBClusterID.

As DBRE, we had already been developing a tool called dbre-toolkit that turns our day-to-day work into command-line operations. It bundles things like a tool for easily performing a Point In Time Restore and a tool that fetches database users' connection information from Secrets Manager to generate a defaults-extra-file. This time we added one more subcommand:

```
% dbre-toolkit secrets-rotation -h
2024/08/01 20:51:12 dbre-toolkit version: 0.0.1
A command that configures Secrets Rotation based on the Secrets Rotation schedule tied to the specified Aurora Cluster.

Usage:
  dbre-toolkit secrets-rotation [flags]

Flags:
  -d, --DBClusterId string   [Required] DBClusterId of the target service
  -u, --DBUser string        [Required] target DBUser
  -h, --help                 help for secrets-rotation
```

The command looks up the specified DBClusterID and DBUser combination in DynamoDB and registers that information with Secrets Manager, which completes the secret rotation setup. With this, we achieved the remaining goals: secret rotation runs within the company's defined governance limits, and users registered in the same DB cluster rotate at the same timing. And with that, we had finally completed everything we set out to do.

Conclusion

Here is what we achieved, mapped to our original to-do list.

- Detect the start, completion, success, or failure of a secret rotation and notify the relevant product team: we developed a mechanism that detects the events written to CloudTrail and sends the appropriate notifications.
- If a secret rotation fails, complete recovery without any impact on the product: we prepared trouble-handling procedures, and by understanding how secret rotation works we found that a rotation is unlikely to turn into an immediate fatal error.
- Execute secret rotation within the company's defined governance limits: we developed the SLI notification mechanism and a configuration tool that makes sure secret rotation is reliably set up.
- Use the same rotation timing for all users registered in the same DB cluster: we developed the mechanism that stores, per DBClusterID, the cron expression to set on the secret rotation in DynamoDB.
- Make it visible how well we comply with the company's governance standards: the SLI notification mechanism.

The overall picture ended up like this:

![The whole image](/assets/blog/authors/_awache/20240812/secrets_rotation_overview.png =750x)

It is more complex than we imagined. In a sense, we had taken managed secret rotation a little too lightly. The secret rotation feature AWS provides is a very powerful mechanism that you can start using right away if you just use it as-is. But once we seriously tried to achieve everything we wanted, it was not so simple, and we had to build many pieces in-house, going through a great deal of trial and error along the way. Using the secret rotation mechanism we have built, we hope to create an environment where anyone can operate KTC's databases easily and safely, without even having to think about it.

The KINTO Technologies DBRE team is actively looking for new teammates! Casual interviews are welcome, so if you are even slightly interested, feel free to reach out via DM on X.
In addition, please also follow our official X account dedicated to recruitment !
Introduction Hello everyone. I am Mori from the Tech Blog Team, now the Technical Public Relations Group. Starting this April, we’ve rebranded our team as the ‘Technical Public Relations Group’ ✨ Thank you for your ongoing support‍ ️🙇‍♀️ I’ve covered my other projects in separate articles. If you’re interested, please feel free to check them out 👀 Compliance with GDPR in the Global KINTO GDPR compliance: Implementing a Cookie Consent Pop-up on a Global Website Getting Started On January 31, 2024, KINTO Technologies (KTC) held its first company-wide offline gathering as a 2024 kick off 🎉 We handled every aspect of this event bottom-up from start to finish. Here’s a behind-the-scenes look at how we put together this large-scale meeting. I’m aiming to document this for future reference, but I also hope it also serves as a helpful guide for anyone tasked with organizing an in-house event. I should have published this earlier this year, but due to my slower writing pace, it’s coming out about six months late. Sorry about that 🙇‍♀️ (I know timeliness is crucial for event management articles... 😭) Why we decided to organize the event During the COVID-19 pandemic, the number of employees increased dramatically, and we now have about 350 employees. At this scale, it is difficult to create a sense of connection and unity, and there were more calls for offline and team-building events than before. In addition, since there are not many opportunities to send messages from top management, it took time for the overall vision to spread. Given these challenges, three team members who frequently handle event management started planning, noting, ‘Since it’s post-COVID-19, an opportunity for all employees to gather might help improve these issues a bit.’ This was early November last year. First, the outline Since it was decided to be held in January, there was only three months for us three until the implementation, so the schedule was quite tight. We decided to create a rough schedule as follows: First of all, in order to gain approval for holding the event itself, we considered the outline of the project as below: Event Purpose 2023 recap and kick off 2024. Share the company-wide vision and encourage cross-departmental communication to foster unity within the organization. Event Agenda Expanded version of the monthly All-Employee Meeting (Development Organization Headquarters All-Hands) Online participation possible for the first half (within work hours) Offline-only social gathering (outside of work hours) ** Content ** | Category | Time | Contents | Note | | ---- | ---- | --- | ---- | | --- | | | Rehearsal | 15:00-16:00 | Venue Setup/Rehearsal | Sound preparation and coordination, etc | | | 16:00 ​-16:30 Admission to Reception process ​ | Attendee Reception | | Main Part | 16:30-16:35 ​ | Opening | | | | 16:35 ​-16:40 ​ ​ | Looking Back on 2023(Vice President) | Review of 2023 and the Outlook for 2024 | | | 16:40-17:30 ​ ​ | 2023 Kanji of the Year| Was also held at the end of 2022. Review of each group | | | | 17:30 ​-17:40 ​ | Break/Preparing for Presentations​​ | | | | | 17:40-18:35 ​ K-1 Grand Prix ​​​| Each division presents their 2023 highlight project and awards will be given to the best presentations! 
| | | | 18:35 ​-18:45 ​ | ​Break​​​​ | | | | | 18:45-19:00| K-1 Grand Prix Result Announcement ​​​​​| Awards and Comments from the Winners | | | | 19:00 ​-19:05 ​ | Summary(President) | 2023 Summary and prospects for 2024 | | Reception | 19:05-19:20 | Take Pictures/ Break/ Room Layout Change | | | | | 19:20 ​-20:50 ​ | Social Gathering​| Toast & Kagami biraki (ceremonial opening of a sake barrel) ・ Company-wide interaction, mini games included! ​ | | | 20:50-21:00| Cleanup​​​ ​ | 21:00 Leave | Engage Every Team! Since the outline had been decided, we announced it internally in order to get an idea of total number of employees who would be willing to attend. Typically, in-house events are announced to the entire company via Slack, but this is a company-wide event. As such, it requires collaboration from every group to succeed️🤦‍♀️ So, we asked each group to designate a person to coordinate the rest of the teams. Having one contact person per group allowed us to gather responses more efficiently, avoiding the need for repeated announcements from management. This approach enabled us to collect responses smoothly and within the deadline. Thank you very much to everyone in charge of the groups! We really appreciated your support😭❤️ ! [announce] (/assets/blog/authors/M.Mori/20240611/announce.png =500x) the announcement in my department More offline participation than expected! Since this event was set up as a Development Organization Headquarters meeting, that is, a meeting of all employees, everyone was essentially required to participate. We expected to have people who would inevitably participate online due to family reasons or business trips, but even so, we needed a venue with a capacity of 300 people. We struggled to find a venue near our office, but after a series of searches and repeated calls, we were able to miraculously book the "Kanda Square Hall" , just a 5-minute walk from our Jimbocho office. ! [Hall] (/assets/blog/authors/M.Mori/20240611/square_hall.jpg =500x) _ A very beautiful venue. Thank you, Kanda Square._ Due to the unavoidable online participation and the need for an English interpretation channel (described later), we decided to stream the All-Employee Meeting part of the event as an online webinar. Thank you very much 😭❤️ to everyone in charge of the broadcast.️ Each role performed its tasks simultaneously! When organizing an event, we typically divide the support team into smaller teams and assign tasks to each. The great thing about our KINTO Technologies is that once teams are assigned a task, they are self-directed and proactive in their approach! It was very helpful because they took initiative and openly shared their opinions. For this event, several of the previously mentioned group representatives were assigned to multiple roles. Role Task Details Overall Coordinators Colleagues in charge of overall coordination and providing advice to other groups in charge when needed. Moderators In charge of facilitating and energizing the entire event (the most crucial role!) Reception Team reviewing effective crowd management strategies, guiding attendees around the venue, and coordinating the information to be displayed throughout the event. Interpretation Coordinators Team managing communication with external English interpreters for our international team members. Kanji of the Year team Gathering all proposals from each group for a kanji representing the year 2023 to be presented at our ‘Kanji of the Year’ segment. 
K-1 Grand Prix team Helping gather the information and slides about the projects that will be presented per department. Summary of President and Vice President's greetings Team creating the slides of the President and Vice President messages together with the purpose of the event. Social Gathering Team coordinating the catering and what activities to do at the social gathering. Novelties Team creating the novelty items and giveaways that will be distributed to everyone. Moderators This time, we enlisted three experienced moderators to be our presenters and energize the event. I’ll share more details about how the event went in a future article. According to the agenda, they divided and assigned themselves the segments they would cover, determined the slides needed for each segment, and planned how to engage and get everyone excited. Although we only had a rough schedule, they identified key concerns for moderating, created their own scripts, and more. They took on many tasks independently, even though we didn’t explicitly ask for them. I was truly impressed 😭❤️ ! [shinko] (/assets/blog/authors/M.Mori/20240611/shikai_shinko.png =500x) _ List of points of concern _ ! [Script] (/assets/blog/authors/M.Mori/20240611/shikai_script.png = 500x) _Moderator’s script _ Reception Even though it’s an internal event, efficient reception is crucial when dealing with such a large number of people. Five team members volunteered as the main receptionists, (and even more assisted us on the day of the event!!) At a reception, smooth guidance is key! I believe it determines the first impression of an event. The longer attendees have to wait to be checked in, the more frustrated they may become with the overall experience. In this case, we improved the process by involving the attendees on their own check-in as much as possible, rather than just the reception staff marking 〇 or X manually or handing out the event swags. We implemented the following flow for its process: By creating in advance a layout with different tables to ensure a quick flow, we were able to guide them to the venue very smoothly without creating crowds of people blocking the reception and entrance. However, we regretted that the guidance to arrive to the venue was not very thorough. We've taken notes to improve that the next time📝 Interpretation Coordinators KTC has a lot of international team members, many of whom are more proficient in English than Japanese. Since this event would include key topics from management, we decided to have interpreters for the main segments. Performing simultaneous interpreting for two and a half hours worth of content is far beyond what an amateur could manage 🤦‍♀️ So we decided to enlist the help of a professional interpretation company that has long supported our orientations for this event as well. 🔻By enabling the Language Interpretation feature on Zoom, attendees can switch audio channels to hear the translated audio or the original version at will🔻 The interpreter listens to Japanese 👂 and speaks simultaneously in English 🗣️ on the English channel, so that the English channel broadcasts the English audio. You can learn how to set it up here 👉 Language Interpretation in meetings or webinars The coordinators maintained constant communication with the off-site interpreters via a separate channel to ensure there were no audio or video issues. Thanks to the interpreters, the management’s message was accurately conveyed to everyone. 
I can’t speak highly enough of professional interpreters and their skill️🙇‍♀️ 2023 Kanji of the Year We also presented this segment in 2022. Managers from each group would take the stage to present the kanji representing the year, summarize their group’s highlights, and share their outlook for the upcoming year. We asked the person in charge to compile the answers of the 22 groups in advance and reflect them in the presentation deck. Given the managers' busy schedules, we notified them in mid-December and set the deadline for January 19. ![kanji_announce](/assets/blog/authors/M.Mori/20240611/kanji_announce.png =500x) 🔻This is from the former Tech Blog Team (currently the Technical Public Relations Group). ![kanji_blog](/assets/blog/authors/M.Mori/20240611/kanji_blog.png =700x) 🔺 We asked everyone to summarized the contents of each group in Confluence and incorporate it into the materials like this!🔻[kanji_blog_ppt](/assets/blog/authors/M.Mori/20240611/kanji_blog_ppt.png =700x) It was interesting to see the colors of each group, and it was a rare opportunity to get to know what each group was doing and what they will do! K-1 Grand Prix team The highlight of this event to say the least. Every month, we give awards to outstanding projects and activities under the name of the "Kageyama Award"👉 Reference article: How We Bolstered All-Employee Meetings The purpose is to do a look back into our most highlighted initiatives, recognize the value of the work being done, and to share information across departments. For the annual award version, we decided to call it the K-1 Grand Prix. The general flow is shown in the figure below: We don’t make presentations for the monthly awards, but we did request them for this annual event. Presentation skills are also tested. Given the large number of groups, we asked each to submit a project and then selected one highlight project from each group. I participated in the Platform Division selection, and it was impressive to see a lively environment where team members from different groups could come together and praise each other . During the announcement of the qualifying rounds and throughout the event, we emphasized that the goal of the K-1GP is not to declare the best or worst but to celebrate all contributions. Of course, the underlying premise is that all the work everyone has done over the past year is excellent. The main purpose of the K-1GP was to reflect on our work and applaud each other's efforts, so at least in the qualifying rounds of the Platform Division I participated in, I was very happy to see this environment of praising each other . Projects selected in the qualifying rounds were required to prepare three-minute presentations during the week leading up to the meeting. It was a very tight schedule, and I am grateful to all the presenters🙇‍♀️‍ All the presentation materials we received were full of individuality, and I looked forward to seeing the new submissions each day. lol Greetings from the President and Vice President The messages from our President Kotera-san and Vice President Kageyama-san were also key parts of our agenda. This segment was very important because we rarely have opportunities to hear directly from them in our monthly general meetings. Specifically from Kotera-san, whom we typically only hear from at KINTO/KTC joint meetings. I believe that hearing a clear message from upper management helps align everyone’s efforts toward a common goal. Like providing a solid foundation. 
The organizing team discussed in advance our vision for KTC engineers, our goals for KTC in 2024, and what we wanted to hear from upper management. We then summarized these points into an overall structure for their review and revision. To make the slides easier to convey, our designers from the Creative Office helped us. They choose words that would not be misunderstood by our international members and complemented them with visuals. ![President_message](/assets/blog/authors/M.Mori/20240611/president_message.jpg =500x) _ Visualizing the President’s Message _ This time, Toyota's new vision (Inventing our path forward together) was announced in a timely manner, and was also shared again by the President. ![toyota_message](/assets/blog/authors/M.Mori/20240611/toyota_message.jpg =500x) Inventing our path forward together Social Gathering The best part of offline events is the opportunity for socializing. This time, we were able to customize the hamburgers with our logo through our catering services, making them look very upscale ✨ ![Logo_burger](/assets/blog/authors/M.Mori/20240611/logo_burger.jpg = 500x) Catering was set up in the foyer, and nothing was placed in the main venue, which made it a bit inconvenient to walk back and forth for food and drinks. We made a toast during the ‘kagami biraki’ ceremony. Since it was the first time for all of us organizers to prepare a kagami biraki, we did a lot of research and became a bit anxious upon reading that we needed a crowbar and a large cutter. However, we discovered a very convenient and unique barrel on the KURAND [^1] website that didn’t require any special tools, so we decided to order that one [^1]: Later, we collaborated with KURAND and cosponsored the event "Source Code Review" Festival organized by our company. ! [kagamibiraki] (/assets/blog/authors/M.Mori/20240611/kagamibiraki.jpg = 500x) Isn't it so cute?? This design was also created by of our Creative Office 💯 After the toast, everyone was basically free to eat and drink, but we are talking about 260 people. As organizers, our goal was make this an opportunity for people who don’t usually interact to engage in conversation. We looked at what we could do to get the conversations started. Should we start with playing games in teams? We were worried that there were too many people, and we didn't want to force participation... On our search for a good method to spark conversations, we discovered Rally : a service that helps you create stamp rallies easily with your smartphone. Rally can scan QR codes to mark virtual stamps in its app. By distributing QR codes by department and encouraging everyone to collect stamps from all departments, we could facilitate interaction among participants! It was a quick decision. We could customize the design extensively even with the free plan, and we were able to have it ready in a week. 🔻Our instructions to use Rally ![rally_slides](/assets/blog/authors/M.Mori/20240611/rally_slides.jpg = 700x) We affixed QR code stickers to the ID cases distributed per departments at the reception, allowing people to scan them and collect the virtual stamps on their smartphones. It was excellent in terms of ease of preparation and as a tool for communication. Overall, it worked very well. It was moving to see people from different departments talking to each other smoothly without feeling forced. 
![rally_poster](/assets/blog/authors/M.Mori/20240611/rally_poster.jpg =500x) _ Poster displayed on the day_ Novelties Another important preparation not to be forgotten is the novelty items. Due to the tight schedule, we couldn’t find what we needed at first, and the Creative Office made a lot of things for us. K-1 GP logo Certificate of Recognition ![idcase](/assets/blog/authors/M.Mori/20240611/design_k1_logo.png =300x) ![award](/assets/blog/authors/M.Mori/20240611/design_award.jpg =300x) Slide Master Barrel design for the Kagami biraki ![slidemaster](/assets/blog/authors/M.Mori/20240611/slide_master.jpg =300x) ![sakadaru](/assets/blog/authors/M.Mori/20240611/design_sakadaru.png =300x) ID card case (distributed to everyone) Staff T-shirt ![idcase](/assets/blog/authors/M.Mori/20240611/design_idcase.jpg =300x) ![staff_shirts](/assets/blog/authors/M.Mori/20240611/design_staff_t.jpg =300x) Tumbler (stamp rally giveaway, for those who collected all stamps) Tote bag (stamp rally giveaway, for those who collected all stamps) ![tumbler](/assets/blog/authors/M.Mori/20240611/design_tumbler.jpg =300x) ![eco_bag](/assets/blog/authors/M.Mori/20240611/design_bag.jpg =300x) Even when I look at it again now I want to say “How much more are we making them make stuff, poor guys!” haha Additionally, our in-house engineers developed a tool to automate the creation of everyone’s name tags. 🔻The Slack icon, department, name, and KTC logo printed for everyone. ![Name_card](/assets/blog/authors/M.Mori/20240611/namecard.jpg = 300x) I casually mentioned, ‘It would be nice to have something like that,’ and they created it right away. I’m always impressed by the speed and quality of our colleagues' work. Once again, I’d like to express my deep appreciation to everyone who helped us out, despite their other responsibilities. 🙇‍♀️🙇‍♀️🙇‍♀️ What I learned, and going forward It’s been six months now, but looking back at what I wrote, I’m still impressed by how much preparation went into it... lol Reflecting on this article, I’m reminded of the importance of clearly communicating the organization’s vision and goals to the entire company and conducting effective team-building offline. By management communicating their vision and strategy directly, we can work toward the same goal on a daily basis, based on their thoughts and directions. Additionally, successfully helping people connect with these key ideas can also boost employee motivation. By addressing this offline, the direction will be more easily absorbed, helping to build trust between employees and management. It will also help in addressing questions and concerns among employees. It has been five years since the KINTO service started, and as a company, we are in the midst to our next stage. I realized that holding such an event at this time would lead to increased engagement and fostering a sense of unity throughout our organization ✨ We can share the results of the event in other articles, but there were many positive responses from participants, such as ‘I feel more motivated to work’, ‘I gained a better understanding of what other teams are doing’, or ‘I learned more about upper management’s perspectives’ 😄 We aim to make this event an annual tradition, using what we’ve learned from this year's operation to make further improvements for the next one💪 And here I am, having written almost 7,000 words before I knew it. That’s how you can see how attached I was to this event. Thank you for reading until the end! 
KINTO Technologies is planning a variety of events, both internally and externally, in the future! We are always posting our external events at Connpass , so please join us if you are interested 😄
Hello! I'm an organizational development coordinator in the Human Resources Group. After joining the company in January 2023, the first company-wide event I organized turned out to be a heartwarming experience, so I decided to write about it. This article covers an event held in February 2023. KTC #thanks Days ![KTC #thanks Days](/assets/blog/authors/hr-team/thanks-days/thanks-days.png =400x) This is the name of the event. Within the company, we refer to KINTO Technologies as KTC. ■Event Overview ・Date: February 13 - February 15, 2023 ・Locations: Muromachi Office, Jimbocho Office, Nagoya Office and Osaka Tech Lab ・Details: At each location, we set up a “Free Snack Bar” where employees could fill a cup with their favorite snacks to give as a gift. They attached a thank you card with a note of appreciation, and exchanged these snacks with their colleagues. Here’s what the filled snack cups looked like So cute... Why we held this event We had two main reasons: to enhance communication and to build a company culture of mutual appreciation. After joining the company, I spoke with various colleagues and discovered that many wanted better cross-team communication across different positions, roles, ages, and genders. We hoped that this event would be an opportunity for everyone to share the gratitude they hadn’t been able to express or had forgotten to convey on a daily basis. Our aim was to deepen communication and pave the way for further interactions. ●Why we focused on gratitude When I researched what kind of communication measures to use, I found that expressing gratitude to one another has a significant impact. ・It creates feelings of gratitude, kindness, and interest in others ・It makes you focus on the strengths and positive aspects of others ・It stimulates communication ・It also increases productivity (some data shows that happiness increases productivity by 12%, while unhappiness reduces productivity by 10%) ・It releases oxytocin, the happiness hormone, making you feel happy and more. We found that communication through expressing gratitude leads to higher quality conversations than without. ●The existence of the #thanks channel In addition, KTC had a wonderful Slack channel called "#thanks" that spontaneously emerged, with various "thanks" messages posted every day. However, only about 10% of employees were contributing, so we hoped to use KTC #thanks Days as an opportunity to increase channel usage. We aimed to use this event as a catalyst for creating a culture of daily gratitude. The actual event The Snack Bar was a huge success, with many people gathering every day! Each location ended up restocking snacks three times, which was a delightful outcome! It was impressive to see how much fun everyone was having while choosing their snacks. As for the Slack channel... Among the many cute posts, there was also this heartwarming one... ...? ? ? Someone posted "Ri-Ga-To-U"(“Thank you” in Japanese) to the #thanks channel and handed the remaining letter "A" to the graduating members as a farewell gift. What a wonderful gesture! It was truly a memorable and touching moment! Channel Promotion Results Did the number of users actually increase? #thanks subscribers: increased by 117% #thanks posts: 119 new posts Total reactions: over 2,000 Number of contributors: increased by 332% (from 19 in January to 63 in just three days) These were also impressive results!! 
Lastly As a result of holding this event across all locations, we received a significant number of reactions on Slack and saw that many people enjoyed the three days. Additionally, since about 25% of KTC’s employees are from overseas, we realized that gratitude transcends language barriers. Even when someone couldn’t read the words, the feeling of appreciation came through clearly. There were words I couldn’t catch myself, but there were many moments when I could tell I was being appreciated. Each time, I mentally translated it to "Thank you for the amazing initiative!", which greatly boosted my self-esteem. <In fact, the POP display had "Thank you" written in all the languages of our employees’ countries.> This experience made me realize how fundamentally important gratitude is, and how transforming it into words and actions can be so profoundly impactful. Gratitude transcends borders. By being receptive to the actions of others, feeling a sense of appreciation, and expressing it, we aim to make these practices a natural part of our company culture. I am committed to helping build these habits at KTC and to strengthening our organization. Thank you so much for allowing me to take on such a wonderful project so soon after joining the company. 감사합니다!
Introduction Hello, I'm Tada from the SCoE Group at KINTO Technologies (from now on referred to as, KTC). The term SCoE, which stands for Security Center of Excellence, might still be unfamiliar to some. At KTC, we reorganized our CCoE team into the SCoE Group this past April. In this blog, I would like to share the background and mission behind our new SCoE organization. For more information on the activities of our CCoE team, please refer to the previous articles if you are interested. Background and Challenges To explain how the SCoE group was founded, it is important to first understand its predecessor, the CCoE team. The CCoE team was established in September 2022. Since I joined KTC in July 2022, so it was formed shortly after I started. At the time of its establishment, our CCoE had two main objectives: Using cloud technology Ensuring continuous efficient development through common services, templates, knowledge sharing, and human resource development. Regulating the use of cloud services Allowing the use of cloud resources with proper policies to maintain a secure state at all times. The CCoE team engaged in various activities based on these two dimensions: Utilization and Regulation. However, since other teams within the same group had already been central to cloud utilization before the inception of the CCoE team, the CCoE's main focus shifted primarily to Governance. Regarding the Regulation aspect, as mentioned in a [previous article](https://blog.kinto-technologies.com/posts/2023-06-22-whats-ccoe-and-security-preset-gcp/), we mainly carried out the following activities: Creating standardized cloud security guidelines Providing pre-configured secure cloud environments Conducting cloud security monitoring and improvement activities Particularly in the area of monitoring and improvement activities, the team checked for deficiencies in the cloud environments used and configured by the product side, identified risky settings and operations, and, if any issues were found, requested and supported the product teams in implementing improvements. However, each product organization had a different approach to security and the level of awareness of it differed, so in some cases security was given a low priority and improvements did not progress. On the other hand, looking across KTC, there were multiple organizations covering the security aspect of each area. In addition to the organizations covering the security of back-office and production environments, there were three separate entities, including the CCoE team, covering cloud security. SOC operations were also conducted independently by each organization, which caused delays in forming company-wide security measures and made it difficult for product teams to identify the correct point of contact for security-related inquiries. At a company-wide level, the Security Group, which covered the security of product environments, played a central role. The CCoE team acted as a bridge between the Security Group and the product teams, carrying out the cloud security monitoring and improvement activities. Establishment of the SCoE Group The SCoE Group was established in response to the context described above to address the following challenges: To promote cloud security improvement activities To unify security-related organizations within KTC When it comes to the second point, consolidating three separate entities into a single department (the IT/IS Division) has enabled more efficient and rapid operations. 
As for the first point, the promotion of cloud security improvement activities, it was taken within the IT/IS Division as well along with the security topics, strengthening the company’s overall approach to security efforts. Previously, CCoE activities were conducted as one team within the Platform Group. However, now that the department’s name included the word Security, our commitment to it has increased. The change from Cloud CoE to Security CoE not only enhanced our focus on cloud security but also strengthened the organization's security functions and emphasized our dedication to cloud security. Being part of the same division as the Security Group allows us to implement security improvement activities more quickly. While there was some regret about the CCoE's dissolution after a year and a half, we accepted the change because the CCoE's main focus was on governance. Although the formal organization has been dissolved, the activities of CCoE continue as a virtual organization across the entire company. SCoE Group’s Mission With the establishment of the SCoE Group, the mission has been defined as follows: To implement monitoring guardrails and take corrective actions in real time The term “guardrails” here refers not only to preventive or detective measures but also to configurations and attacks that pose security risks. Given the current state of cloud security, many incidents occur due to cloud configuration issues, and the time between identifying a posture flaw and experiencing an actual incident is rapidly decreasing. Therefore, we believe that the mission of SCoE is to quickly respond to security risks as they arise and to ensure we are well-prepared in advance to handle such situations effectively. Specific activities of the SCoE Group To achieve our mission, the SCoE Group undertakes the following activities: Prevent security risks Continuously monitor and analyze security risks Respond swiftly to security risks To prevent security risks, we continue to create standardized cloud security guidelines and providing pre-configured secure cloud environments, a practice carried over from our CCoE days. While our focus has primarily been on AWS, we are now expanding our efforts to include Google Cloud and Azure. To ensure these practices are well integrated within the company, we also conduct regular training sessions and workshops. In terms of "Continuously monitor and analyze security risks," we have primarily focused on CSPM (Cloud Security Posture Management) and SOC. However, we are now expanding our activities to include CWPP (Cloud Workload Protection Platform) and CIEM (Cloud Infrastructure Entitlement Management). Additionally, we have started the process of consolidating SOC operations, which were previously conducted separately by three different organizations, into a single unified operation. In terms of what we do to respond swiftly to security risks, we have started exploring the automation of configurations, scripting, and the use of generative AI. We believe that in the future, it will be difficult to maintain a secure environment in the field of cloud security without utilizing generative AI, and we are actively considering its use. Summary At KINTO Technologies, we have restructured the CCoE team into the SCoE Group. This restructuring aims to enhance our focus on cloud security in a more specialized manner by continuing the Regulation activities previously undertaken by the CCoE. 
Moving forward, the SCoE Group will play a key role in leading the evolution of our cloud security. As cloud technology advances and cloud security becomes increasingly complex, we aim to minimize its security risks and ensure the delivery of safe and reliable services. We are committed to providing the essential support needed to achieve this. Thank you for reading until the end. Closing words The SCoE Group is looking for new team members to work with us. Whether you have practical experience in cloud security or are simply interested and eager to learn, we encourage you to get in touch. Please feel free to contact us. For more details, please check here
はじめに 初めまして。KINTO ONE開発部の新車サブスク開発グループでフロントエンド開発を担当しているITOYUです。 今、Webアプリケーションを作成する際はVue.js、React、Angularなどのフレームワークを使うことが一般的です。新車サブスク開発GでもReact、Next.jsを使って開発を行っています。 やれReactのver.19がリリースされた、やれNext.jsのVer.15がリリースされたというように、ライブラリやフレームワークのバージョンアップが頻繁に行われています。そのたびに更新された機能や変更点のキャッチアップを行い、知識をアップデートする必要があります。 そして昨今のフロントエンドの進化は目覚ましいものがあります。数ヶ月前まで使っていたライブラリやフレームワークが、数ヶ月後には旧式となり、新しいライブラリやフレームワークが登場することも珍しくありません。 このような状況下で、フロントエンド開発者は常に新しい技術やライブラリ、フレームワークに対してアンテナを張り、情報収集を行い、学習を続ける必要があります。 これはフロントエンド開発者の定めであり、フロントエンド開発者にとっての楽しみでもあります。 熱い情熱と飽くなき好奇心を持つフロントエンド開発者は、新しい技術やライブラリ、フレームワークを学び使いこなすことで、自分のスキルを向上させ、より良いWebアプリケーションを効率的に開発しベストプラクティスを追求し、 フロントエンドの達人 を目指しています。 しかしフロントエンドにおけるライブラリやフレームワークの根底にはJavaScriptがあります。果たして私たちはJavaScriptを100%理解し、使いこなしているのでしょうか。 JavaScriptの機能を使いこなせていないのに、ライブラリやフレームワークを使いこなすことができるのでしょうか。 フロントエンドの達人と呼べるのでしょうか。 かくいう私もその問いかけに対して、自信を持って「はい」と答えることができません。 ということで、フロントエンドの達人を目指すべく、JavaScriptの学び直しを行い、不足している知識を補うことを決意しました。 この記事の目的 学び始めの第一歩として、JavaScriptの基本的な概念である スコープ について学び、理解を深めることを目的としています。 あまりにも初歩すぎるだろ!と思われるかもしれません。きっと大抵のフロントエンドエンジニアの皆さんは、スコープとは何か、といちいち考えることなく、当たり前のように使いこなしていることでしょう。 ですがスコープの概念や関連する知識や名称を言語化するとなると、意外と難しいものです。 この記事では、スコープの概念を理解するために、スコープの種類について理解を深めることを目的としています。 この記事を読み終わった後に、新しい実装方法が身に付くといったことは無いでしょう。ですが、スコープの概念を理解することで、JavaScriptの挙動を理解し、より良いコードを書くための基礎を築くことができるでしょう。 :::message この記事で記載されているJavaScriptのコードや概念は、ブラウザ上での動作を前提として解説しています。 Node.jsなどの環境によっては、挙動が異なる場合がありますので、ご注意ください。 ::: スコープ JavaScriptではスコープという概念があります。スコープとは 実行中のコードから参照できる変数や関数の範囲 のことです。 まずは以下のスコープの種類について見ていきましょう。 グローバルスコープ(global scope) 関数スコープ(function scope) ブロックスコープ(block scope) モジュールスコープ(module scope) グローバルスコープ グローバルスコープとは、プログラムのどこからでも参照できるスコープのことです。 変数や関数にグローバルスコープを持たせる方法は大まかに以下の通りです。 グローバルオブジェクトのプロパティに追加された変数 スクリプトスコープを持つ変数 グローバルオブジェクトのプロパティに追加された変数 グローバルオブジェクトのプロパティに変数や関数を追加することで、グローバルスコープを持たせることができます。 環境によってグローバルオブジェクトは異なりますが、ブラウザ環境ではwindowオブジェクト、Node.js環境ではglobalオブジェクトがグローバルオブジェクトになります。 今回の例ではブラウザ環境を想定して、windowオブジェクトにプロパティを追加する方法を紹介します。 その方法とは、varで変数や関数を宣言することです。varで宣言された変数や関数はグローバルオブジェクトのプロパティとして追加され、どこからでも参照できるようになります。 // windowオブジェクトのプロパティに追加された変数 var name = 'KINTO'; console.log(window.name); // KINTO また、グローバルオブジェクトに追加された変数を呼ぶ際、windowオブジェクトを省略することもできます。 // windowオブジェクトを省略した変数の呼び出し var name = 'KINTO'; console.log(name); // KINTO スクリプトスコープを持つ変数 スクリプトスコープとは、JavaScriptファイルのトップレベル、もしくはscript要素のトップレベルで宣言された変数や関数が参照できるスコープのことです。 トップレベルでlet,constで宣言された変数や関数はスクリプトスコープを持ちます。 <!-- スクリプトスコープを持つ変数 --> <script> let name = 'KINTO'; const company = 'KINTOテクノロジーズ株式会社'; console.log(name); // KINTO console.log(company); // KINTOテクノロジーズ株式会社 </script> トップレベル トップレベルとは、関数やブロックの外側のことを指します。 これだけだどトップレベルの説明がわかりにくいかもしれません。以下の例でトップレベルで宣言されている変数と、そうでない変数の違いを見てみましょう。 <!-- トップレベルで宣言された変数 --> <script> let name = 'KINTO'; const company = 'KINTOテクノロジーズ株式会社'; console.log(name); // KINTO console.log(company); // KINTOテクノロジーズ株式会社 </script> <!-- トップレベルで宣言されていない変数 --> <script> const getCompany = function() { const name = 'KINTO'; console.log(name); // KINTO return name; } console.log(name); // ReferenceError: name is not defined if (true) { const company = 'KINTOテクノロジーズ株式会社'; console.log(company); // KINTOテクノロジーズ株式会社 } console.log(company); // ReferenceError: company is not defined </script> 上記のコードだと、 getCompany 関数内で宣言された name 変数と、 if 文内で宣言された company 変数は、関数の中やif文のブロックの中でのみ参照できます。 グローバルオブジェクトとスクリプトスコープの違い トップレベルでlet,constで宣言された変数は、varで宣言された変数と同様にグローバルスコープを持ち、どこからでも参照できるようになります。 しかし、let,constで宣言された変数はvarで宣言された変数と異なり、グローバルオブジェクトのプロパティには追加されません。 // let,constで宣言された変数はグローバルオブジェクトのプロパティには追加されない let name = 'KINTO'; const company = 'KINTOテクノロジーズ株式会社'; 
console.log(window.name); // undefined console.log(window.company); // undefined :::message グローバルオブジェクトの扱いは慎重に varを使ってグローバルオブジェクトのプロパティに変数や関数を追加する方法は、グローバルオブジェクトの汚染を招くため、避けるべきです。 その理由として、異なるスクリプト間で変数や関数の名前が重複すると、予期せぬ挙動を引き起こす可能性があるためです。 なのでグローバルスコープを持たせたい場合は、let,constで宣言された変数を使うことが推奨されています。 ::: 関数スコープ 先ほどのスクリプトスコープを持たない変数の例の中で登場しましたが、関数に囲まれた波括弧{}内で宣言された変数や関数は、その関数内でのみ参照出来ます。これを 関数スコープ といいます。 const getCompany = function() { const name = 'KINTO'; console.log(name); // KINTO return name; } console.log(name); // ReferenceError: name is not defined name変数は関数の中で宣言されているため、getCompany関数の中でのみ参照できます。なので関数の外からname変数を参照しようとするとエラーが発生します。 ブロックスコープ こちらも先ほどのスクリプトスコープを持たない変数の例の中で登場しましたが、波括弧{}で囲まれた範囲内で宣言された変数や関数は、そのブロック内でのみ参照できます。これを ブロックスコープ といいます。 if (true) { let name = 'KINTO'; const company = 'KINTOテクノロジーズ株式会社'; console.log(name); // KINTO console.log(company); // KINTOテクノロジーズ株式会社 } console.log(name); // ReferenceError: name is not defined console.log(company); // ReferenceError: company is not defined このようにletとconstで宣言された変数はブロックスコープになり、波括弧{}内で宣言された変数は波括弧{}内でのみ参照できます。 :::message 関数宣言とブロックスコープ 関数宣言をブロック内で行うと、関数宣言はブロックスコープを持たないため、関数はスコープ外からも参照できます。 ※JavaScriptのバージョンや実行環境によって結果が異なる場合があります。 if (true) { function greet() { console.log('Hello, KINTO'); } greet(); // Hello, KINTO } greet(); // Hello, KINTO なので関数に対してブロックスコープを持たせたい場合は、ブロックスコープを持つ変数宣言を利用して関数を代入する方法を使うことが推奨されています。 if (true) { const greet = function() { console.log('Hello, KINTO'); } greet(); // Hello, KINTO } greet(); // ReferenceError: greet is not defined ::: モジュールスコープ モジュールスコープとは、モジュール内で宣言された変数や関数が参照できるスコープのことです。これにより、モジュール内の変数や関数は、そのモジュール内でのみアクセス可能となり、外部からは直接参照することができません。 モジュール内で宣言された変数や関数を外部から参照するためには、 export を使って外部に公開し、 import を使ってその変数や関数を利用するファイルに取り込む必要があります。 例えば、 module.js というファイルに以下のように変数を宣言します。 // module.js export const name = 'KINTO'; export const company = 'KINTOテクノロジーズ株式会社'; const category = 'サブスクリプションサービス'; // この変数はexportされていないため、外部からは参照できません。 exportされた変数は、別のファイルでimportすることで参照することができます。 // モジュールスコープを持つ変数の呼び出し import { name, company } from './module.js'; console.log(name); // 出力: KINTO console.log(company); // 出力: KINTOテクノロジーズ株式会社 // `category`はexportされていないため、この行はエラーを引き起こします。 console.log(category); // ReferenceError: category is not defined exportされていない変数は、外部から参照しようとするとエラーが発生します。これは、モジュールスコープがその変数を外部から隠蔽しているためです。 // モジュールスコープを持たない変数の呼び出し import { category } from './module.js'; // SyntaxError: The requested module './module.js' does not provide an export named 'category' console.log(category); // importが失敗するため、この行は実行されません。 このように、モジュールスコープを理解することは、JavaScriptでのモジュール間の依存関係を管理する上で非常に重要です。 まとめ スコープとは実行中のコードから参照できる変数や関数の範囲のこと グローバルスコープとは、どこからでも参照できるスコープのこと スクリプトスコープとは、JavaScriptファイルのトップレベル、もしくはscript要素のトップレベルで宣言された変数や関数が参照できるスコープのこと 関数スコープとは、関数に囲まれた波括弧{}内で宣言された変数や関数が参照できるスコープのこと ブロックスコープとは、波括弧{}で囲まれた範囲内で宣言された変数や関数が参照できるスコープのこと モジュールスコープとは、モジュール内でのみ参照できるスコープのこと 今回はJavaScriptにおけるスコープの種類について学びました。次回はスコープに関連する知識について紹介します。
Introduction Hello. I am Nakaguchi from KINTO Technologies, Mobile App Development Group. I work on developing KINTO Easy Application App and also organize study sessions and events for the iOS team. Eight members of our iOS team attended try! Swift Tokyo 2024 , which was held from March 22 to 24, 2024. Later, as part of our study sessions, we held LTs (lightning talks) to reflect on our experiences. Out of the eight participants, five gave presentations through LT, while the remaining three published articles on the KTC Tech Blog. Here are their blog posts: Recap of Try! Swift Tokyo 2024 Trying! Swift Community in 2024 One more article will be published soon!! LT event details Usually, our team study sessions are conducted solely within the iOS team, but today we had guests including members from the "Manabi no Michi no Eki (roadside station for learning)", (more information here) , who support company-wide study sessions, as well as members from the Android team. With over 20 participants, it was a very lively event. Here is the online venue! Everyone has a lovely smile! 😀 Here's the offline venue! Due to the rain or possibly hay fever, many people were working from home that day, so the turnout was a bit low. However, everyone who came was smiling happily! 😀 Additionally, we set up a dedicated thread on Slack during the iOS team’s study session, and everyone enthusiastically engaged in the discussion. It was a huge success, with over 150 comments in just one hour! First speaker: Mori-san Mori-san shared a wide range of impressions about the sessions they attended! It was also memorable that Mori-san expressed gratitude to the event staff and simultaneous interpreters. I got the impression that Mori-san already has a deep understanding of SwiftUI and TCA, which are used in their work. This year's try! Swift had many sessions that delved deeper into the basics, which likely helped deepen their knowledge. Here is a video of Mori-san's presentation! 2nd speaker: Hinomori-san ( ヒロヤ@お腹すいた ) Hinomori-san was involved as a staff member for three days and shared many behind-the-scenes stories with us! You can also check out his blog article here ! It turns out that much of the setup around were actually done by Hinomori-san. Over the three days, I saw Hinomori-san working as a staff member many times and seemed to be very busy. The scene during the closing on the second day, where all the organizers, speakers, and staff gathered on stage, was very moving, and Hinomori-san stood out among them. Here is Hinomori-san's presentation! 3rd speaker: Nakaguchi This will be my LT. This year, I want to focus on catching up with visionOS, so in my LT, I also talked about “Creating a visionOS app with Swift” (Day 1) and “How to start developing spatial apps unique to Apple Vision Pro” (Day 3). I haven’t had the chance to develop for visionOS in my work or private projects yet (and of course, I don’t have the actual device), but my desire to work with visionOS has increased tremendously! Here is my presentation! 4th speaker: Ryomm-san Ryomm-san had already released a participation report on Zenn, and it was presented during the LT. (Released on 23 rd March 👀...so fast!!) I participated in try! Swift Tokyo 2024! https://zenn.dev/ryomm/articles/e1683c1769e259 Ryomm-san provided an overall recap of the sessions, as well as reflections on the sponsor booths and the after-party. With amazing communication skills, Ryomm-san exchanged information with many people, including the speakers! 
According to Ryomm-san, the trick to starting a conversation with the person next to you is courage and a friendly "Hey there!"!! At events like these, almost everyone wants to talk to someone, so don’t hesitate to strike up a conversation. We should all follow Ryomm-san’s example 😭 Here is Ryomm-san's presentation! 5th speaker: Goseo-san Goseo-san shared their impressions of the session “How to build a sense for designing good applications” (Day 1)! They actually tried out the source code introduced during the session and shared their thoughts on implementing it with SwiftUI. It was enlightening to learn that animations in SwiftUI still have some quirks. Here is Goseo-san's presentation (given later, on a different day). Conclusion This was my first time participating in try! Swift, which was held for the first time in five years. I usually only attend conferences online, so it was also my first offline conference, and it was incredibly educational and valuable. In the future, I would like to get more involved, for example by participating as a sponsor or staff member. I think it was a great initiative for our team to turn our participation in try! Swift into tangible outputs such as LT events and blog posts, and I hope we can continue these kinds of activities at future large conferences like try! Swift and iOSDC.
Introduction Hello! I am TKG from the Corporate IT Group at KINTO Technologies (KTC). As a corporate engineer, I usually manage the Service Desk and Onboarding Operations. The other day, I presented the "Study Session in the Format of Case Presentations + Roundtable Discussions, Specialized in the Corporate IT Domain" at the event **“KINTO Technologies MeetUp! 4 case studies for information systems shared by information systems” ** " This time, I would like to introduce the content of the case study presentation from that study session, along with some additional information! First, the presentation materials: The presentation materials are stored on Speaker Deck. The story of how the help desks of KINTO and KINTO Technologies have collaborated (and are continuing to collaborate) - Speaker Deck Choosing the theme Currently, I hold positions in both KTC and KINTO, and I am in charge of the help desk area in both companies. When I was thinking about what to present, I realized that there aren’t many case studies on how close companies collaborate with each other. So, I chose this as my theme. To be honest, I had some doubts about whether it was worth presenting since it wasn’t something particularly "flashy”. However, I motivated myself by thinking that these not particularly glamorous topics are exactly the ones that should be shared, and I went ahead to prepare the content. About KINTO and KTC As this story is about the collaboration between KINTO and KTC, I thought it was important to first explain the relationship between the two. I have always found it to be quite unclear, both before and after I joined, so I would like to explain their relationship from my point of view. They are sibling companies rather than subsidiaries, and there's a common misconception that KTC only develops for KINTO. In reality, we also develop for our parent company, Toyota Financial Services (TFS), and create apps for end users, such as my route and Prism Japan. The IT environments of the two companies are quite different. You can see in the simplified chaos map above that KINTO appears to be fully cloud-based, but its core systems operate on-premises within the internal network. On the other hand, KTC does not have an internal network at all. Each office operates independently. Our Muromachi Office has bases on the 7th and 16th floors, but each operates independently. The only on-premises equipment consists of the network devices and multifunction devices at each location. This is the structure of the IT departments of both companies. While KINTO is divided into two sections, Service Desk (Help Desk) and Infrastructure Management (IT Support), KTC is divided into five. What I will be discussing today is the Service Desk at KINTO that I am in charge of, what it would be the "Tech Service" at KTC. Both departments handle help desk operations. The roles of the various organizations within KTC are extensive enough to require multiple articles, so I will omit them here. This concludes the explanation of the relationship between KINTO and KTC. Episode 1. The story of implementing Jira Service Management (JSM) as the Inquiry Desk for both companies At KTC, we were using Jira Software (Jira) to handle inquiries. Initially, it worked well, but as the number of employees increased, issues started to arise with the existing Jira setup. The problem was that the tickets were only written in free text, which created a burden for both the submitters and the help desk. 
Additionally, there were cases where the help desk couldn’t check the status of a ticket or handle sensitive content (since the inquiry desk’s Jira was accessible to all employees). Rather than customizing Jira in the hope of resolving these issues, we decided to implement a dedicated ITSM (IT Service Management) tool so that our staff could stay focused on their primary engineering tasks. Although we wanted to compare various tools, we had limited time. Given that Atlassian products were already used within the company, we chose Jira Service Management (JSM) for its compatibility. An additional advantage was that 10 licenses were available for free for one year, making it easy to test and evaluate the tool. Initially, the plan was to implement JSM only at KTC, but as the collaboration between KINTO and KTC continued, we became aware of the issues at KINTO as well, so we decided to work on it together. The implementation started with KINTO. Following the concept of "Winning Quickly", we first built up implementation and operational experience at KINTO, and then leveraged that experience to roll JSM out at KTC. There were no major concerns during the implementation at KINTO, but several came up when it was KTC’s turn. Some of the specific concerns raised at the time were as follows: Q1. Won’t it require more effort if we cannot refer to other requests when issuing accounts, changing permissions, and so on? A. With JSM, it is possible to create optimized forms for each type of service request, eliminating the need to refer to other requests. Q2. (Since everyone can make requests) Will service requests be made without the manager’s approval? A. Such requests may occur, but the help desk will coordinate with the manager as needed. I also recently had the opportunity to gather feedback from managers of various departments about the JSM rollout. They said the concerns they had beforehand did not materialize, that it has become much easier to track their requests, and that it is a significant improvement over the previous system. At first our focus was simply on getting the tool in place, but since then we have been continuously optimizing the inquiry forms, removing fields that turned out to be redundant in practice, and creating batch request forms to streamline processes. One of our top priorities was the "expansion of the knowledge base". However, our analysis of inquiries showed that service requests outnumbered incident-related inquiries, which are the ones that particularly benefit from a knowledge base. This likely stems from the fact that KTC is a group of technical professionals with high IT literacy. The focus has therefore shifted toward service requests, which users cannot resolve on their own (i.e., only administrators can handle them), rather than incident-type issues that users can solve themselves. Currently, we are focusing on reducing the number of service requests and improving the speed at which we process them. Episode 2. The story of how KINTO used KTC’s expertise to reduce costs and improve the PC replacement process (and is continuing to do so) At KTC, we generally outsource the kitting process. However, since onboarding sometimes happens on short notice, we have been working on automating kitting using MDM (mobile device management). At peak times, more than 20 people join in a single month!
For more details on this efficiency improvement, please refer to the presentation material below (in Japanese): The benefits of automating Windows Kitting - Speaker Deck On the other hand, at KINTO, we had vendors perform initial kitting from image deployment, and then installed individual applications. Although we had already been using Intune for settings, there was no specific trigger to push for further efficiency. At that time, we embarked on a large-scale PC replacement project at KINTO, which gave us the opportunity to collaborate more closely with KTC to streamline the process. By collaborating between KINTO and KTC and reviewing past documents, we were able to eliminate parts that previously required manual work and replace manual settings with Intune. As a result, we no longer needed to request vendors for image creation, achieving greater efficiency. While we have made progress in streamlining processes, we believe there is still room for improvement. Due to the different environments compared to KTC, reaching a "zero-touch" setup seems quite distant, but we would like to improve it little by little and move towards "little-touch" setup. In conclusion: Never forget to appreciate our predecessors Both KINTO and KTC have only been around for a few years since their founding, and they had to quickly establish the environments. There is no doubt that the people at that time made the best choices during the chaos of starting up, and they laid the foundations step by step. Within the changing environment, the case we discussed is an example of how we were able to successfully improve things when given the right opportunity. KINTO and KTC still have many areas that are not fully optimized, and there is a lot of room for improvement in both companies. If you are someone who is eager to take on this challenge, please join us! Together, let’s enhance the IT environments of KINTO and KTC, creating a space where staff can perform at their best without spending time on tasks other than engineering!
My name is Ryomm and I work at KINTO Technologies. I am developing the app my route (iOS). Today I will explain how to create a reference image for Snapshot Testing in any directory. Conclusion verifySnapshot(of:as:named:record:snapshotDirectory:timeout:file:testName:line:) You can specify the directory if you use this method. Background Recently, I wrote an article about introducing Snapshot Testing. However, after running it for a while, the number of test files has increased significantly, making it very difficult to find the specific test file I need. ![Large number of SnapshotTesting files](/assets/blog/authors/ryomm/2024-04-26/01-yabatanien.png =150x) Large number of Snapshot Test files So I decided to organize the Snapshot Testing files into appropriate subdirectories, but the method assertSnapshots(of:as:record:timeout:file:testName:line:) in the Snapshot Testing library pointfreeco/swift-snapshot-testing does not allow specifying the location for creating reference images. The existing directory structure related to Snapshot Testing looks as follows: App/ └── AppTests/ └── Snapshot/ ├── TestVC1.swift ├── TestVC2.swift │ └── __Snapshots__/ ├── TestVC1/ │ └── Reference.png └── TestVC2/ └── Reference.png When test files are moved to a subdirectory, the method mentioned above creates a directry __Snapshots__ within that subdirectory. Inside this directory, it creates a directory with the same name as the test file which contains the reference images. App/ └── AppTests/ └── Snapshot/ ├── TestVC1/ │ ├── TestVC1.swift │ └── __Snapshots__/ │ └── Reference.png ← Created here 😕 │ └── TestVC2/ ├── TestVC2.swift └── __Snapshots__/ └── Reference.png ← Created here 😕 As part of the existing CI system, the entire directory App/AppTests/Snapshot/__Snapshots__/ is mirrored to S3, so I do not want to change the location of the reference images. The target directory structure is as follows: App/ └── AppTests/ └── Snapshot/ ├── TestVC1/ │ └── TestVC1.swift ├── TestVC2/ │ └── TestVC2.swift │ └── __Snapshots__/ ← I want to put reference images here 😣 ├── TestVC1/ │ └── Reference.png └── TestVC2/ └── Reference.png Specify the Directory for Reference Images and Run a Snapshot Test verifySnapshot(of:as:named:record:snapshotDirectory:timeout:file:testName:line:) By using the method, you can specify the directory. The three methods provided in Snapshot Testing have the following relationships: public func assertSnapshots<Value, Format>( Matching value: @autoclosure () throws -> Value, As strategies: [String: Snapshotting<Value, Format>], record recording: Bool = false, timeout: TimeInterval = 5, file: StaticString = #file, testName: String = #function, line: UInt = #line ) { ... } ↓Execute forEach on the comparison formats passed to as strategies public func assertSnapshot<Value, Format>( Matching value: @autoclosure () throws -> Value, As snapshotting: Snapshotting<Value, Format>, Named name: String? = nil, record recording: Bool = false, timeout: TimeInterval = 5, file: StaticString = #file, testName: String = #function, line: UInt = #line ) { ... } Run the following and use the returned values to perform the test. verifySnapshot(of:as:named:record:snapshotDirectory:timeout:file:testName:line:) You can check the actual code here . In other words, as long as the same thing is done internally, it is perfectly fine to use verifySnapshot(of:as:named:record:snapshotDirectory:timeout:file:testName:line:) directly! Boom! 
```swift
extension XCTestCase {
    var precision: Float { 0.985 }

    func testSnapshot(vc: UIViewController, record: Bool = false, file: StaticString, function: String, line: UInt) {
        assert(UIDevice.current.name == "iPhone 15", "Please run the test by iPhone 15")
        SnapshotConfig.allCases.forEach {
            let failure = verifySnapshot(
                matching: vc,
                as: .image(on: $0.viewImageConfig, precision: precision),
                record: record,
                snapshotDirectory: "Any path",
                file: file,
                testName: function + $0.rawValue,
                line: line)
            guard let message = failure else { return }
            XCTFail(message, file: file, line: line)
        }
    }
}
```

For our app my route, I initially passed only a single value to strategies, so I omitted the looping process over strategies.

Now, although I was able to specify the directory, to follow the existing Snapshot Testing pattern I want to create a directory based on the test file name and place the reference images inside it. The path passed to verifySnapshot(of:as:named:record:snapshotDirectory:timeout:file:testName:line:) needs to be an absolute path, and since the development environment varies among team members, it is necessary to generate the path according to each environment. The resulting code is fairly plain, but I implemented it as follows.

```swift
extension XCTestCase {
    var precision: Float { 0.985 }

    private func getDirectoryPath(from file: StaticString) -> String {
        let fileUrl = URL(fileURLWithPath: "\(file)", isDirectory: false)
        let fileName = fileUrl.deletingPathExtension().lastPathComponent
        var separatedPath = fileUrl.pathComponents.dropFirst() // dropFirst() returns an ArraySlice<String> here
        // Delete the path components after the Snapshot folder
        let targetIndex = separatedPath.firstIndex(where: { $0 == "Snapshot" })!
        separatedPath.removeSubrange(targetIndex+1...separatedPath.count)
        let snapshotPath = separatedPath.joined(separator: "/")
        // Since we pass it as a String to verifySnapshot, I will write it as a String without converting it back to a URL.
        return "/\(snapshotPath)/__Snapshots__/\(fileName)"
    }

    func testSnapshot(vc: UIViewController, record: Bool = false, file: StaticString, function: String, line: UInt) {
        assert(UIDevice.current.name == "iPhone 15", "Please run the test by iPhone 15")
        SnapshotConfig.allCases.forEach {
            let failure = verifySnapshot(
                matching: vc,
                as: .image(on: $0.viewImageConfig, precision: precision),
                record: record,
                snapshotDirectory: getDirectoryPath(from: file),
                file: file,
                testName: function + $0.rawValue,
                line: line)
            guard let message = failure else { return }
            XCTFail(message, file: file, line: line)
        }
    }
}
```

This way, we can keep the reference images in their original location while organizing the Snapshot Testing files into subdirectories. This resolves the inconvenience of not being able to find the files when you want to update a Snapshot Test. There is still room for improvement, so I aim to make our development experience even more enjoyable ♪
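To make the path manipulation above concrete, here is a quick trace of what getDirectoryPath(from:) returns, using a hypothetical file location (the absolute path below is just an example):

```swift
// Hypothetical absolute path of a test file that was moved into a subdirectory:
//   /Users/dev/App/AppTests/Snapshot/TestVC1/TestVC1.swift
//
// pathComponents           -> ["/", "Users", "dev", "App", "AppTests", "Snapshot", "TestVC1", "TestVC1.swift"]
// dropFirst()              -> ["Users", "dev", "App", "AppTests", "Snapshot", "TestVC1", "TestVC1.swift"]
// remove after "Snapshot"  -> ["Users", "dev", "App", "AppTests", "Snapshot"]
//
// Returned snapshotDirectory:
//   /Users/dev/App/AppTests/Snapshot/__Snapshots__/TestVC1
//
// So the reference images land back under the shared __Snapshots__ folder,
// in a directory named after the test file, regardless of where the test file lives.
```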
はじめに こんにちは。 KINTOテクノロジーズ モバイルアプリ開発グループの中口です。 iOSチームのチームリーダーとしてこれまでにチームビルディングに関する記事を公開しておりますので、ご興味あればぜひご一読ください。 振り返り会がマンネリ化したのでプロファシリテーターを呼んでみた 180度フィードバックとってもおすすめです! 先日、 【たぶん世界最速開催!『アジャイルチームによる目標づくりガイドブック』ABD読書会】 こちらのイベントに参加してきました。 このイベントの参加目的は、主に以下の3点です。 アクティブ・ブック・ダイアローグ®(以下「ABD」という。)を体験してみたかった。 イベントで扱う本である 「アジャイルチームによる目標づくりガイドブック」 に興味があった。 著者である「小田中 育生(おだなか いくお)」さんにお会いしてみたかった。 その中でも、ABDという読書法は初めての経験で非常ためになるものでした。この読書法をもっと多くの方に知ってもらいたいと思ったので本記事ではABDに関する内容を中心に紹介させていただきます。 諸注意 本記事で掲載する人物や資料は、全て開催者様及びご本人様より掲載許可をいただいております。 イベントについて こちらのイベントは2024/07/10(水)に開催されたイベントで、『アジャイルチームによる目標づくりガイドブック』を「刊行前に著者と会えるABD読書会」として開催されました。 募集ページが公開され、その日中に応募枠の15名を突破してしまう人気イベントでして、参加できたことが非常に幸運だったと思います。 イベントのことを紹介してくれた弊社コーポレートITグループの きんちゃん にはとても感謝です!! 本について 本の内容については、実際に読んでもらえればと思いますのでここでは多くは語りませんが、イベントのオープニングでいくおさんが紹介されていた内容を共有させていただきます。 世の中的に目標設定があまり好まれていない傾向があるように感じる。 しかし、みんなが真剣に目標に向き合いそれを達成できるようになれば世界は良くなっていくと思う。 だから、いい目標を作れることはとても大事である。 一方で、目標を作ることも大事だが、それをいかに達成していくかはもっと大事である。 この本では、目標作りに関しては初めの2割程度で、 残りは目標を達成する方法をアジャイルの要素を取り入れつつ紹介する本となっている。 また、目標とセットで語られることの多い人事評価については書いていないが、 8名の方にコラムを書いていただいており、その中で評価の部分も良い感じに補完されているので、 コラムもぜひ読んでほしい! いくおさんによるオープニングの様子 いくおさんについて いくおさんとは、これまで面識は無いのですが、下記のLTや記事を拝見して存じておりました。 『Keeper of the Seven Keys Four Keysとあと3つ 』 こんなエンジニアリングマネージャだから仕事がしやすいんだなぁと思う10個のこと 誇り高き「マネージャー」を全うするために。“理想のEM”小田中氏を支えた珠玉の5冊 開発生産性やエンジニアリングマネージャーに関する考え方、及び読書に対する向き合い方など、とても参考になる部分が多く、ぜひ一度お会いしてお話ししてみたいと思っていました。 しかし当日は簡単な挨拶はさせていただいたものの、しっかりとお話できる時間を作ることができませんでした。 非常に残念でしたが、今後の機会に期待したいと思います。 ABDについて こちら ABDの公式サイト より引用させていただきます。 ABDとは何か? 開発者:竹ノ内 壮太郎さんによる説明 ABDは、読書が苦手な人も、本が大好きな人も、 短時間で読みたい本を読むことができる全く新しい読書手法です。 1冊の本を分担して読んでまとめる、発表・共有化する、気づきを深める対話をするというプロセスを通して、 著者の伝えようとすることを深く理解でき、能動的な気づきや学びが得られます。 またグループでの読書と対話によって、一人一人の能動的な読書体験を掛け合わせることで学びはさらに深まり、 新たな関係性が育まれてくる可能性も広がります。 ABDという、一人一人が内発的動機に基づいた読書を通して、 より良いステップを踏んでいくことを切に願っております。 流れ コ・サマライズ 本を持ちよるか1冊の本を裁断し、担当パートでわりふり、各自でパートごとに読み、要約を作ります。 リレー・プレゼン リレー形式で各自が要約文をプレゼンします。 ダイアログ 問いを立てて、感想や疑問について話しあい、深めます。 ABDの魅力 短時間で読書が可能 短時間で読書ができて、著者の想いや内容を深く理解できるので、本を積ん読している方にはピッタリです。 サマリーが残る アクティブ・ブック・ダイアローグ®後にサマリーが残るので、見直して復習したり、本を読んでいない人にも要点を伝えやすくなります。 記憶の定着率の高さ 発表を意識してインプットしてまとめた後、すぐにアウトプットをして意見交換をするので、深く記憶に定着します。 深い気づきと創発 多様な人どうし、それぞれの疑問や感想をもって対話することで、深い学びの創発が生まれます。 個人の多面的成長 集中力、要約力、発表力、コミュニケーション力、対話力など、今の時代に必要なリーダーシップを同時に磨けます。 共通言語が生まれる 同じメンバーで行うことで、同じレベルの知識を共有できるため、共通言語を作ることができます。 コミュニティ作り 本が1冊あれば仲間との対話や場を作れるので、気軽なコミュニティ作りに最適です。 何より楽しい! 本を読んで感動したり学んだ熱量をその場ですぐに共有できるので、豊かな学びが生まれ、何より読書が楽しくなります。 個人的には「1. 短時間で読書が可能」、「6. 共通言語が生まれる」、「7. コミュニティ作り」、「8. 
何より楽しい!」が価値が高いなと感じました。 当日の様子 本が裁断され15パート分に分かれています。 こんな光景始めみました!笑 裁断された本 コ・サマライズ(20分) 各自が担当パートを読み、要約を作成します。 20分で本を読みA4用紙3枚にまとめるのですが、これがなかなか難しかったです。。。 時間に追われすぎていて撮影を忘れてしまいました。 リレー・プレゼン(1分30秒/人×15名) 各自が要約したものを、壁に貼り付けます。 みなさんが要約した資料 そして要約したものを1分30秒で発表します。 みなさん、要約もプレゼンもとても素晴らしかったです。 写真は私のプレゼンの様子です。1分30秒というすごく短い時間だったことと緊張で、何を話したか全く覚えていません。。。 私の発表の様子 ダイアログ(25分) ここでは、プレゼンの中から3つのパートを参加者でピックアップし、各グループに分かれて深掘りを行いました。 私はその中で「助け合えるチームになろう」のグループに参加させていただきました。 グループによる深掘りの様子 グループ内にはスクラムマスターやエンジニアリングマネージャーをされている方もおり、様々な意見交換をさせていただきました。 その中でも、「好きなこと」は、得意(十八番)/苦手(成長機会)関わらず伸ばしていくべきなので、好きなことに挑戦できるチーム作りをしたいね、という話題が印象的でした。 ABDを通して本から学んだこと 私自身はこれまで、目標管理として「OKR」(Objectives and Key Results)を用いたことがなかったのですが、OKRに関する理解が進みました。 また、目標づくりにおいては、いかに内発的動機によってチームとして目標を立てることが重要かを学びました。 そのためにも、トップダウンによる目標設定を行うのではなく、チーム間で議論を行なった上での目標づくりが鍵となることが印象的でした。 また、重要なのは「目標の達成」であり、「タスクの消化」ではないということも印象に残っております。 そのため「時には優先順位が低いタスクを捨てる勇気が必要である」という考えは、これまでの自分には無い考え方でした。 そして、目標達成のために「時間が無い」ということがあるかと思いますが、それを 本当に時間が無い 時間をかけて良いか分からない 意欲が湧かない というように分解されているのも初めて聞きました。 「本当に時間が無い」というのはイメージしやすいのですが、「時間をかけて良いか分からない」、「意欲が湧かない」というのは初めて聞きましたが、経験的に納得感がありました。 こちらに関しては、本に解決法なども記載されていたのであたらめて本を読んで復習したいです。 感想 初めてABDを体験いたしましたが、刺激的でとても楽しかったです。 当日参加されていたメンバーが、題材の本に興味がある方ばかりだったので、プレゼンやダイアログにおいても建設的な場であり学びも多かったです。 弊社でもABDを実践してみたいと思ったので、興味があるメンバーを募ってやってみたいな、と考えています。 一方で、下記に挙げるような理由から運用の難易度はかなり高いのではないかと思いました。 限られた時間内で進行する必要があるためファシリテーターのスキルが求められる。 コ・サマライズが難しく、参加者により要約やプレゼンのレベルに差が生じてしまいそう。 題材とする本の選定や、メンバー集めが難しそう。 私はこれまで何度か輪読会に参加したことがあるのですが、「長期間の催しから生じる継続の負担」、「(輪読会の形式によりますが)個人の作業負担」など、実際に行うには少々ハードルが高い読書法だと感じていました。 一方でABDは、短時間で一気に終了できるので輪読会で感じていたようなデメリットを解消できるとても良い読書法だと思います。 ただし、短時間が故に本の理解度が下がってしまうという、トレードオフは生じてしまうかと思います。 「題材とする本の選定」や「参加メンバーとの事前協議」をしっかり行なった上でどのような読書法が良いのかは検討の必要があると思いました。
Sharing How Great Our Group Reading Session for "Learning from GitLab: How to Create the World's Most Advanced Remote Organization" Was.

Hello, I am Awache ( @_awache ). We were so fascinated by the book "[Learning from GitLab: How to Create the World's Most Advanced Remote Organization - How to use documents to achieve maximum results without an office](https://www.amazon.co.jp/dp/4798179426)" that we decided to hold a group reading session with both people from the company and from outside. In this article, I'd like to share our efforts with you.

But first, let me announce our next get-together: we will be hosting the 'finale' of the group reading session for "Learning from GitLab: How to Create the World's Most Advanced Remote Organization". A bit sudden maybe, but it's important. You can see the details below:

Connpass: Grand Finale of the group reading session for "Learning from GitLab: How to Create the World's Most Advanced Remote Organization"
Date and time: 18:00 - 21:00 (Opening at 17:40) Thursday, April 25, 2024
Event Type: Offline
Venue: Muromachi Office, KINTO Technologies Corporation

This event is intended for those who have read the book and participated in the previous group reading sessions of "Learning from GitLab: How to Create the World's Most Advanced Remote Organization - How to use documents to achieve maximum results without an office", but it is also open to those who are currently reading it or plan to do so in the future. We will discuss how the group reading session was conducted at each of the companies and how it was received, and gather insights from the book, aiming to create an open forum for all participants to engage in the discussions. There are still spots available, so if you are interested, please join us! We'd like to find ways for everyone to enjoy the session casually. (Though the irony is not lost on me that we will be meeting in person to discuss remote organizations lol)

Common Challenges at Group Reading Sessions

Ensuring continuous participation
Regular gatherings are necessary to complete reading a book in a group setting. By dividing the book into reasonable portions and meeting once a week, it would take about 2 to 3 months.

Dropping out in the middle
It is always challenging to continue anything for long periods of time, so it's natural that one by one the number of participants dwindles as the meetings progress. If we fail to keep the participants motivated, we may end up with a very lonely group reading session.

Difficulty of joining in the middle
Given the nature of book readings, the hurdle for participation typically rises in the middle stages. As a result, the number of participants is likely to decrease, with opportunities to increase the number of participants being very rare.

Sustaining leadership
There are various burdens on the leader: pre-preparations, securing time for participants, and facilitation. These are not one-offs but continue until the end of the book, and it takes a lot of motivation to keep doing this alone.

Recognizing differences in reading speed and understanding among participants
The speed and comprehension level of reading varies depending on the participant. Without recognizing this, the group reading session will end up with a lot of tedious time without much discussion.

So taking all the above into account, it is quite challenging to finish a book in a group setting, isn't it? I myself have many times dropped out in the middle of a book or couldn't read it until the end.
However, this time, I really wanted to share the ideas of this book widely within the company and I was strongly motivated to finish it until the end . So, I was trying to figure out how to solve these issues. For example, I hypothesized that we could approach the issue effectively by creating an opportunity to conduct group reading sessions on the same subject in different ways at multiple locations beyond the boundaries of the company, not only by ourselves but involving others, and share our findings at the end of the session. However, there are limits to what I can do alone. I consulted with @soudai1025 -san, who is a technical advisor in our DBRE team, and with his cooperation, we decided to hold “A Sodai-Naru (Great) Group Reading Session.” Organizing The Reading Session Several companies: Using a kickoff meeting as a trigger Will set a time of about three months to hold a series of group reading sessions internally, After which findings will be shared together at the end We decided to divide our event in the above three-stages. You can check out our kickoff session on YouTube: https://www.youtube.com/watch?v=IBgmGtpW15Q How to Start a Group Reading Session I will briefly describe what I prepared for the group reading session after the kickoff was over. Gathering team members First of all, we gathered willing participants through internal open channels. We reached out to people interested in group reading sessions and waited for volunteers to raise their hands. As a result, we got 14 people interested! Transcription (Transcribing a book) I was determined to transcribe the book from the moment I decided to lead the group reading session. I think that the action of transcribing, which enables reading, writing, and reviewing simultaneously, is an excellent activity for quickly understanding the content of a book. However, this book is over 300 pages, so one needs determination! lol Purchasing books in bulk It is mentioned in our career website as well, but KINTO Technologies allows you to purchase the books you need. Since several of the 14 people did not have it yet, we used this program and purchased the books in bulk for those who didn’t. Thinking how to proceed with our in-house group reading session I seriously considered how we could ensure that everyone enjoys the session without feeling any pressure whenever they attend. I will introduce the specific actions later. While I was pondering this, I realized that time was passing by fast and our in-house group reading session was set to February. The In-house Group Reading Session Kickoff Working Agreement I shared with the participants a summary of what kind of atmosphere I would like to create. Here are the details: This session is designed with the aim of minimizing pressure on the participants: Follow up with each other even if someone did not read the book For the first 10 minutes, we'll have quiet reading time The main focus is on discussion , and the output is made public to create an atmosphere in which even those who are not actually participating can join in during the process. 
Summarize every output and make it available to everyone Record the session via Zoom and publish it (whenever possible) The same content can be read multiple times Do not interfere when other participants are speaking Be respectful and accepting of what participants say Stimulate free discussion Conduct discussions in breakout rooms of up to 4 people Conduct discussions in small groups reduces the psychological barrier to speaking up and allows each person to bring up what they want to discuss Do not refuse participation from ROMs (Read Only Members) When participating, communicate each ones’ situation to the rest of the participants to create an accepting atmosphere. Things like: I won’t be able to talk today Due to where I am working from today, I may not be able to talk much Everyone actively creates output Minutes of discussions are actively logged by those who are available (e.g., those who can’t speak that day) How we proceeded with the group reading session I still believe that a certain timeline is desirable for ongoing discussions. Even if you are a little late but want to join the group reading session, it may be psychologically difficult to do so if the session is in the middle of a heated discussion. On the other hand, if you have some idea of what you are doing, for example, you may be able to join in the middle of the session because it is now quiet reading time. That's why I decided to create a clear structure for us. Basic Format Quiet reading time (10 minutes) Discussion time (30 minutes) Content sharing (20 minutes) Content of the Discussions What I could relate to What I could not relate to What I want to put into practice at KINTO Technologies Perhaps the results of what was put into practice could be shared in the next session Discussion content output The agenda during the discussion is described in Google Slide After the discussion, everyone shares the topics that came up Selection of tools to use Gather Gather was chosen as the web meeting tool. Since our main focus was discussion, the idea was to engage with individuals who were comfortable talking with those attending. With Zoom, you have to make a breakout room every time, and it's hard to sort them out. Gather, a virtual office space perfectly suited our needs to gather everyone together and later move to small rooms for discussion. However, it is not suitable for sharing recordings, so we gave up on that. Instead, we made sure to keep logs so that we can review them later. Microsoft Loop Loop was chosen as our collaboration tool. KINTO Technologies has been using Confluence for the most part, but it has had some weaknesses when it came to collaborative editing, with several participants writing notes freely. We decided on Loop because the experience is not so different from Confluence, but it is less stressful. Setting Up Additional Meetings The time for our group reading session was set for every Tuesday from 18:00 to 19:00. It was a bit late, and it might overlap with prime time for those who have unexpected business schedules or for those with children. If you miss participation even once, the psychological hurdle to rejoining becomes higher. So, I decided to hold exactly the same content on the following Wednesday but from 12:00 to 13:00. This reduced the risk of missed participation, and the participants who attended the day before to have time to understand the content in more depth. 
Moreover, listening to other participants' perspectives provided them with new insights, making each session more enjoyable. Leveraging Generative AI As I mentioned in the Working Agreement above, I had a strong desire to create a place where people can still follow up each other without having to read the book. Although there is a quiet reading time in the first 10 minutes, it is rather challenging to read the required amount within 10 minutes. The strong allies that helped us were transcribing and ChatGPT. By summarizing the transcribed text by ChatGPT as much as needed, we realized even 10 minutes of quiet reading time can make a big difference in the quality of participants' input. For example, here is a summary of the first part. Don't you think that silent reading time would be effective when you can condense about 12 pages into this amount? ![AI Brief Summary](/assets/blog/authors/_awache/20240422/AIざっくり要約.png =750x) The original text is also available in Confluence, so if you find something you are interested in the summary, you can search for keywords to quickly find the point. Personally, this was such an important factor that I believe it was the main reason we were able to make it till the end. In-house Group Reading Session ![Group Reading Session](/assets/blog/authors/_awache/20240422/gather.png =750x) As a result, a total of 17 group reading sessions were held. I was able to make it through all 17 without being left alone until the end lol. Some people participated fully, while others came whenever they could. Despite variations in the number of participants per session due to some sessions being repeated, I found the number to be quite good in terms of participation per chapter. Part 1: Understanding the benefits of remote organizations / Part 2: Process to parallel the world's most advanced remote organization February 13, 2024 (6 participants) February 14, 2024 (9 participants) Chapter 5: Culture is fostered by value February 21, 2024 (8 participants) February 27, 2024 (4 participants) February 28, 2024 (4 participants) Chapter 6: Rules of communication March 5, 2024 (7 participants) March 6, 2024 (5 participants) Chapter 7: The importance of onboarding in remote organizations / Chapter 8: Fostering psychological safety March 13, 2024 (7 participants) March 19, 2024 (5 participants) Chapter 9: Bringing out individual performance / Chapter 10: Human resource system based on GitHub Value March 26, 2024 (7 participants) March 27, 2024 (5 participants) Chapter 11: Managerial roles and mechanisms to support management & Chapter 12: Achieving conditioning April 2, 2024 (6 participants) April 3, 2024 (7 participants) Chapter 13: Using L&D to improve performance and engagement & Conclusion April 9, 2024 (7 participants) April 10, 2024 (5 participants) Wrap up! April 16, 2024 (5 participants) April 17, 2024 (4 participants) To learn more about what was discussed, please join us for the Finale of the group reading session for " Learning from GitLab: How to Create the World's Most Advanced Remote Organization "! Ultimately, this book contains extensive information on what we should be aiming for, and there were intense discussions about how it may collide with reality, which is a little difficult to write about. So, please let me talk about this topic on another day. 
What we gained and produced through this group reading session

Connections
I was able to learn about the thoughts of those who participated through this group reading session, and I will continue to cherish these connections as I work to make KINTO Technologies a more exciting place to work. We have a channel called #thanks where we can openly express our gratitude towards each other. I was also very happy to receive warm messages from the participants on the final day of the group reading session.

![thanks](/assets/blog/authors/_awache/20240422/thanks.png =750x)

Transcription
I feel that transcribing is an important process if I want to continue to lead group reading sessions in the future, as it allowed me to respond to the AI summary and various other topics that came up during the discussions.

AI Summary
Summaries output using generative AI are really powerful. You may forget where and what was written over time, but if you have a summary, a quick 10-minute look can refresh your memory.

Mandala Chart
In my own way, I summarized the key points of this book in a mandala chart template. Of course, it is impossible to do everything, so I would like to set points and themes and increase what I can do little by little.

Conclusion
How was the group reading session for "Learning from GitLab: How to Create the World's Most Advanced Remote Organization - How to use documents to achieve maximum results without an office"? I was personally more satisfied with this session than any other I have held before, which is why I felt compelled to share it on our Tech Blog. In truth, there is much more I would like to write, but it would be too long, so I will stop here for now.

Reminder: We will be hosting the 'finale' of the group reading session for "Learning from GitLab: How to Create the World's Most Advanced Remote Organization". There are still spots available. We will do our best to make it an enjoyable session as well, so if you are willing to come, please apply! Thank you very much.

Connpass: Grand Finale of the group reading session for "Learning from GitLab: How to Create the World's Most Advanced Remote Organization"
Date and time: 18:00 - 21:00 (Opening at 17:40) Thursday, April 25, 2024
Event Type: Offline
Venue: Muromachi Office, KINTO Technologies Corporation

This event is intended for those who have already read the book and participated in the previous group reading sessions of "Learning from GitLab: How to Create the World's Most Advanced Remote Organization - How to use documents to achieve maximum results without an office", but it is also open to those who are currently reading it or plan to do so in the future. We look forward to seeing you at the event! See you!
Hi! I’m Ryomm, developing the iOS app my route at KINTO Technologies. My fellow developers, Hosaka-san and Chang-san, along with another business partner and I, successfully implemented and integrated our Snapshot Testing. Introduction Currently, the my route app team is moving towards transitioning to SwiftUI, so we have decided to implement Snapshot Testing as a foundational step. We began this transition by initially replacing only the content, while keeping UIViewController as the base. This approach ensures that the implemented Snapshot Testing will be directly applicable. Let me introduce the techniques and trial-and-error methods we used to apply Snapshot Testing to an app built with UIKit. What is Snapshot Testing? It is a type of testing that verifies whether there are any differences between screenshots taken before and after code modifications. We use the Point-Free library for modifications https://github.com/pointfreeco/swift-snapshot-testing . While developing my route , we extend XCTestCase to create a method that wraps assertSnapshots as follows: We determined the threshold to be at 98.5% after various trials to ensure that very fine tolerance variances were accommodated successfully. extension XCTestCase { var precision: Float { 0.985 } func testSnapshot(vc: UIViewController, record: Bool = false, file: StaticString, function: String, line: UInt) { assert(UIDevice.current.name == "iPhone 15", "Please run the test by iPhone 15") // SnapshotConfig is an enum that specifies the list of devices to be tested SnapshotConfig.allCases.forEach { assertSnapshots(matching: vc, as: [.image(on: $0.viewImageConfig, precision: precision)], record: record, file: file, testName: function + $0.rawValue, line: line) } } } The Snapshot Testing for each screen is written as follows. final class SampleVCTests: XCTestCase { // snapshot test whether it is in recording mode or not var record = false func testViewController() throws { let SampleVC = SampleVC(coder: coder) let navi = UINavigationController(rootViewController: SampleVC) navi.modalPresentationStyle = .fullScreen // This is where the lifecycle methods are called UIApplication.shared.rootViewController = navi // The lifecycle methods starting from viewDidLoad are invoked for each test device testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line) } } Tips We need to wait for the data fetched by the API to be reflected in the View after the viewWillAppear method and subsequent methods. To ensure the Snapshot Testing run after the API data is reflected in View, we have encountered issues where the tests execute too early, causing problems like the indicator still being visible. Since it is difficult to determine if the data from the API call has been reflected in the view, we will implement a delegate to handle this verification. protocol BaseViewControllerDelegate: AnyObject { func viewDidDraw() } In the ViewController class, create a delegate property that conforms to the previously prepared delegate. If no delegate is specified during initialization, this property defaults to nil. class SampleVC: BaseViewController { // ... weak var baseDelegate: BaseViewControllerDelegate? // .... init(baseDelegate: BaseViewControllerDelegate? = nil) { self.baseDelegate = baseDelegate super.init(nibName: nil, bundle: nil) } // ... } When calling the API and updating the view, for example, after receiving the results with Combine and reflecting them on the screen, call baseDelegate.viewDidDraw() . 
This notifies the Snapshot Testing that the view has been successfully updated with the data. someAPIResult.receive(on: DispatchQueue.main) .sink(receiveValue: { [weak self] result in guard let self else { return } switch result { case .success(let item): self.hideIndicator() self.updateView(with: item) // Timing of data reflection completion self.baseDelegate?.viewDidDraw() case .failure(let error): self.hideIndicator() self.showError(error: error) } }) .store(in: &cancellables) As we want to wait for baseDelegate.viewDidDraw() to be executed, we add XCTestExpectation to the Snapshot Testing. final class SampleVCTests: XCTestCase { var record = false var expectation: XCTestExpectation! func testViewController() throws { let SampleVC = SampleVC(coder: coder, baseDelegate: self) let navi = UINavigationController(rootViewController: SampleVC) navi.modalPresentationStyle = .fullScreen UIApplication.shared.rootViewController = navi expectation = expectation(description: "callSomeAPI finished") wait(for: [expectation], timeout: 5.0) viewController.baseViewControllerDelegate = nil testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line) } func viewDidDraw() { expectation.fulfill() } } When there are multiple sets of data to be retrieved from the API that need to be reflected (when calling baseDelegate.viewDidDraw() in multiple places), you can specify expectedFulfillmentCount or assertForOverFulfill . final class SampleVCTests: XCTestCase { var record = false var expectation: XCTestExpectation! func testViewController() throws { let SampleVC = SampleVC(coder: coder, baseDelegate: self) let navi = UINavigationController(rootViewController: SampleVC) navi.modalPresentationStyle = .fullScreen UIApplication.shared.rootViewController = navi expectation = expectation(description: "callSomeAPI finished") // When viewDidDraw() is called twice expectation.expectedFulfillmentCount = 2 // When viewDidDraw() is called more times than specified, any additional calls should be ignored expectation.assertForOverFulfill = false wait(for: [expectation], timeout: 5.0) viewController.baseViewControllerDelegate = nil testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line) } func viewDidDraw() { expectation.fulfill() } } If the baseViewControllerDelegate from the previous screen remains active, running the Snapshot Testing across all screens will call viewDidLoad and subsequent lifecycle methods for each test device every time testSnapshot() is invoked. This causes the API to be called multiple times and viewDidDraw() to be executed repeatedly, resulting in multiple calls error. Therefore, we clear the baseViewControllerDelegate after calling wait() . Frame misalignment on devices While Snapshot Testing can generate snapshots for multiple devices, we encountered issues where the layout and size of elements were misaligned on some devices. Misaligned This issue is caused by the lifecycle of the Snapshot Testing execution. In a Snapshot Testing, it starts loading on one device, and then other devices are rendered by changing the size without reloading. This means that viewDidLoad() is executed only once at the beginning, and for the other devices, it starts from viewWillAppear() . As a solution, create a MockViewController that wraps the viewcontroller you want to test. Override viewWillAppear() to call the methods that are originally called in viewDidLoad() . 
import XCTest @testable import App final class SampleVCTests: XCTestCase { // snapshot test whether it is in recording mode or not var record = false func testViewController() throws { // Write it the same way as when calling the screen let storyboard = UIStoryboard(name: "Sample", bundle: nil) let SampleVC = storyboard.instantiateViewController(identifier: "Sample") { coder in // VC wrapped for Snapshot Test MockSampleVC(coder: coder, completeHander: nil) } let navi = UINavigationController(rootViewController: SampleVC) navi.modalPresentationStyle = .fullScreen UIApplication.shared.rootViewController = navi testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line) } } class MockSampleVC: SampleVC { required init?(coder: NSCoder) { fatalError("init(coder: \\(coder) has not been implemented") } override init?(coder: NSCoder, completeHander: ((_ readString: String?) -> Void)? = nil) { super.init(coder: coder, completeHander: completeHander) } override func viewWillAppear(_ animated: Bool) { super.viewWillAppear(animated) // The following methods are originally called in viewDidLoad() super.setNavigationBar() super.setCameraPreviewMask() super.cameraPreview() super.stopCamera() } } Still not fixed・・・ If the rendering is still misaligned, calling the layoutIfNeeded() method to update the frames often resolves the issue. import XCTest @testable import App final class SampleVCTests: XCTestCase { var record = false func testViewController() throws { let storyboard = UIStoryboard(name: "Sample", bundle: nil) let SampleVC = storyboard.instantiateViewController(identifier: "Sample") { coder in MockSampleVC(coder: coder, completeHander: nil) } let navi = UINavigationController(rootViewController: SampleVC) navi.modalPresentationStyle = .fullScreen UIApplication.shared.rootViewController = navi testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line) } } fileprivate class MockSampleVC: SampleVC { required init?(coder: NSCoder) { fatalError("init(coder: \\(coder) has not been implemented") } override init?(coder: NSCoder, completeHander: ((_ readString: String?) -> Void)? = nil) { super.init(coder: coder, completeHander: completeHander) } override func viewWillAppear(_ animated: Bool) { super.viewWillAppear(animated) // Update the frame before calling rendering methods self.videoView.layoutIfNeeded() self.targetView.layoutIfNeeded() super.setNavigationBar() super.setCameraPreviewMask() super.cameraPreview() super.stopCamera() } } Looks good Snapshot for WebView screens There are situations where you may want to apply Snapshot Testing to toolbars to other elements, but not the content displayed in a Webview. In such cases, it is good to separate the part that loads the WebView content from the WebView’s configuration and mock the loading part during tests. For the implementation, we separate the method that calls self.WebView.load(urlRequest) etc. to display the Webview content from the method that configures the WebView itself. // Implementation in the VC class SampleWebviewVC: BaseViewController { // ... override func viewDidLoad() { super.viewDidLoad() self.setNavigationBar() **self.setWebView()** self.setToolBar() } override func viewDidAppear(_ animated: Bool) { super.viewDidAppear(animated) **self.setWebViewContent()** } // ... 
/** * Separate the method for configuring the WebView from the method for setting its content */ /// Configure the WebView func setWebView() { self.webView.uiDelegate = self self.webView.navigationDelegate = self // Monitor the loading state of the web page webViewObservers.append(self.webView.observe(\\.estimatedProgress, options: .new) { [weak self] _, change in guard let self = self else { return } if let newValue = change.newValue { self.loadingProgress.setProgress(Float(newValue), animated: true) } }) } /// Set content for the WebView private func setWebViewContent() { let request = URLRequest(url: self.url, cachePolicy: .reloadIgnoringLocalCacheData, timeoutInterval: 60) self.webView.load(request) } // ... } Then, in the mock that wraps the VC under test, we make it so that the method that loads the WebView content is not called. import XCTest @testable import App final class SampleWebviewVCTests: XCTestCase { private let record = false func testViewController() throws { let storyboard = UIStoryboard(name: "SampleWebview", bundle: .main) let SampleWebviewVC = storyboard.instantiateViewController(identifier: "SampleWebview") { coder in MockSampleWebviewVC(coder: coder, url: URL(string: "<https://top.myroute.fun/>")!, linkType: .Foobar) } let navi = UINavigationController(rootViewController: SampleWebviewVC) navi.modalPresentationStyle = .fullScreen UIApplication.shared.rootViewController = navi testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line) } } fileprivate class MockSampleWebviewVC: SampleWebviewVC { override init?(coder: NSCoder, url: URL, linkType: LinkNamesItem?) { super.init(coder: coder, url: url, linkType: linkType) } required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") } override func viewWillAppear(_ animated: Bool) { // Change the method that was called in viewDidLoad to be called in viewWillAppear self.setNavigationBar() self.setWebView() self.setToolBar() super.viewWillAppear(animated) } override func viewDidAppear(_ animated: Bool) { // Do nothing // Override to avoid calling the method that sets the WebView content } } Snapshot of the screen that is calling the camera Call the camera and also take the snapshot of the screen which displays a customized view. However, since the camera does not work on the simulator, it is necessary to find a way to disable the camera part while still being able to test the overlay. There was also a suggestion to insert a dummy image to make the camera work on the simulator, but it seems too costly to implement this just for the Snapshot Testing of a non-primary screen. In myroute’s Snapshot Testing, we used mocks to override the parts that handle the camera input and the parts that set up the capture to be displayed in AVCaptureVideoPreviewLayer, so they are not called. This way, the AVCaptureVideoPreviewLayer displays as a blank screen without any input, allowing the customized View to be shown on top. In the actual implementation, it is written as follows: class UseCameraVC: BaseViewController { // ... 
override func viewDidLoad() { super.viewDidLoad() self.videoView.layoutIfNeeded() setNavigationBar() setCameraPreviewMask() do { guard let videoDevice = AVCaptureDevice.default(for: AVMediaType.video) else { return } let videoInput = try AVCaptureDeviceInput(device: videoDevice) as AVCaptureDeviceInput if captureSession.canAddInput(videoInput) { captureSession.addInput(videoInput) let videoOutput = AVCaptureVideoDataOutput() if captureSession.canAddOutput(videoOutput) { captureSession.addOutput(videoOutput) videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main) } } } catch { return } cameraPreview() } override func viewWillAppear(_ animated: Bool) { super.viewWillAppear(animated) // Since the camera cannot be used in the simulator, disable it #if targetEnvironment(simulator) stopCamera() dismiss(animated: true) #else captureSession.startRunning() #endif } } Override them with mocks as follows: Due to the reasons described regarding the frame misalignment issue, we call the methods from viewWillAppear() that were originally called in viewDidLoad() . class MockUseCameraVC: UseCameraVC { // ... override func viewWillAppear(_ animated: Bool) { super.viewWillAppear(animated) self.videoView.layoutIfNeeded() super.setNavigationBar() super.setCameraPreviewMask() super.cameraPreview() super.stopCamera() } } The cameraPreview() method uses AVCaptureVideoPreviewLayer to display the camera image from the captureSession , but since we override it to have no input, it renders as a white view. CI Strategy At the initial stage of introducing Snapshot Testing, we uploaded reference images to a single S3 bucket. During reviews, we downloaded the reference images each time and ran the tests. However, when a view was modified and the reference images were updated simultaneously, there was an issue where tests for other PRs would fail until the PR with the updated reference images was merged. To address the issue, we created two directories within the bucket hosting the reference images. One directory hosts the images during PR reviews, and once a PR is merged, the images are copied to the other directory. By doing so, we ensure that updates to the reference images do not interfere with the tests of other PRs. Useful shells my route provides four shells for snapshots. The first one downloads all the reference images for the current screen. This allows the tests to pass locally. Used when switching from the # develop branch # Example: Sh setup_snapshot.sh # Clean up the old files from the reference images directory rm -r AppTests/Snapshot/__Snapshots__/ # Download reference images from S3 aws s3 cp $awspath/AppTests/Snapshot/__Snapshots__ --recursive --profile user The second shell uploads modified reference images to the PR review S3 bucket when creating a Pull Request. # When creating a PR, upload the modified tests as arguments. # Example: Sh upload_snapshot.sh ×××Tests path="./SpotTests/Snapshot/__Snapshots__" awspath="s3://strl-mrt-web-s3b-mat-001-jjkn32-e/mobile-app-test/ios/feature/__Snapshots__" if [ $# = 0 ]; then echo "No arguments provided" else for testName in "${@}"; do if [[ $testName == *"Tests"* ]]; then echo "$path/$testName" aws s3 cp "$path/$testName" "$awspath/$testName" --exclude ".DS_Store" --recursive --profile user else echo "($0testName) No tests found" fi done fi The third shell individually downloads the reference images for the modified screens. It is used when reviewing a Pull Requests that includes screen changes. 
# When reviewing tests, download the reference images for the specific tests # Example: Sh download_snapshot.sh ×××Tests if [ $# = 0 ]; then echo "No arguments provided" else rm -r AppTests/Snapshot/__Snapshots__/ for testName in "${@}"; do if [[ $testName == *"Tests"* ]]; then echo "$localpath/$testName" aws s3 cp "$awspath/$testName" "$localpath/$testName" --recursive --profile user else echo "($0testName) No tests found" fi done fi The fourth shell forcibly updates the reference images. Although it is basically unnecessary because the reference images for screens with modified test files are automatically copied, it is useful when changes to reference images occur without modifying the test files, such as when common components are updated. # If changes affect reference images other than the modified test files, (for example, when common components are updated), # Please upload manually # Please use it after merging # Example: Sh force_upload_snapshot.sh × × × Tests if [ $# = 0 ]; then echo "No arguments provided" else echo "Do you want to forcibly upload to the AWS S3 develop folder? 【yes/no】" read question if [ $question = "yes" ]; then for testName in "${@}"; do if [[ $testName == *"Tests"* ]]; then echo "$localpath/$testName" aws s3 cp "$localpath/$testName" "$awsFeaturePath/$testName" --exclude ".DS_Store" --recursive --profile user aws s3 cp "$localpath/$testName" "$awsDevelopPath/$testName" --exclude ".DS_Store" --recursive --profile user else echo "($testName) No tests found" fi done else echo "Termination" fi fi Since having four shells can be confusing regarding when and who should use them, we have defined them in the Taskfile and made the explanations easily accessible. When executing, we have to use -- when passing arguments such as specifying file names, making the command bit longer. As a result, we often call the shells directly. However, having this setup is valuable just for the sake of clear explanations. % task task: [default] task -l --sort none task: Available tasks for this project: * default: show commands * setup_snapshot: [For Assignee] [After branch switch] Used when making changes to Snapshot Testing after switching from the develop branch. (Example) task setup_snapshot or sh setup_snapshot.sh * upload_snapshot: [For Assignee] [During PR creation] Upload the snapshot images to the S3 bucket for PR review by passing the modified tests as arguments (Example) task upload_snapshot -- ×××Tests or sh upload_snapshot.sh ×××Tests * Download_snapshot: [For Reviewer] [During review] Download the reference images by passing the relevant tests as arguments (Example) task download_snapshot -- ×××Tests or sh download_snapshot.sh ×××Tests * force_upload_snapshot: [For Assignee] [After merging] If changes affect reference images other than the modified test files, (for example, when common components are updated), manually upload the changes by passing the modified tests as arguments. (Example) task force_upload_snapshot -- ×××Tests or sh force_upload_snapshot.sh ×××Tests Additionally, this is something I have set up personally, but I find it convenient to have an alias that changes the hardcoded profile name in the shell to the profile configured in your environment. (For those who prefer their own profile names) In this case, the profile hardcoded as user is changed to myroute-user . 
```bash
alias sett="gsed -i 's/user/myroute-user/' setup_snapshot.sh && gsed -i 's/user/myroute-user/' upload_snapshot.sh && gsed -i 's/user/myroute-user/' download_snapshot.sh && gsed -i 's/user/myroute-user/' force_upload_snapshot.sh"
```

Bitrise

In my route, we use Bitrise for CI. When a PR that includes changes to Snapshot Testing is merged, Bitrise automatically detects these changes and copies the reference images from the feature folder to the develop folder (a rough sketch of this copy step is shown at the end of this post). This ensures that the snapshot tests always run correctly in all situations.

Detecting subtle differences in reference images

Sometimes, differences are too subtle to see with the naked eye, but snapshot tests will still detect them and report errors. Can't see anything (3_3)? In such cases, using ImageMagick to overlay the images can help you spot the differences more easily. By running the following command, you can see the overlaid images:

```bash
convert Snapshot/reference.png -color-matrix "6x3: 1 0 0 0 0 0.4 0 1 0 0 0 0 0 0 1 0 0 0" ~/changeColor.png \
  && magick Snapshot/failure.png ~/changeColor.png -compose dissolve -define compose:args='60,100' -composite ~/Desktop/blend.png \
  && rm ~/changeColor.png
```

Changing the hue of the reference image to a reddish tint before overlaying makes it easier to spot the differences. For added convenience, I recommend adding this command to your bashrc.

```bash
compare() {
  convert $1 -color-matrix "6x3: 1 0 0 0 0 0.4 0 1 0 0 0 0 0 0 1 0 0 0" ~/Desktop/changeColor.png;
  magick $1 ~/Desktop/changeColor.png -compose dissolve -define compose:args='60,100' -composite ~/Desktop/blend.png;
  rm ~/Desktop/changeColor.png
}
```

If the files are generally placed in the same location, you may only need to pass the test name as an argument instead of the entire path. Additionally, since images hosted online can also be processed, this method can be useful during reviews.

To wrap things up, I bring Surprise Interviews! I interviewed my colleagues to get feedback on the implementation of Snapshot Testing!

Chang-san said: "Thanks to Hosaka-san's initial research, we are now able to handle snapshots in a more convenient way. With the help of Ryomm-san, various implementation methods were organized into documents to ensure we didn't forget anything. It has been really great, and I am very grateful 🙇‍♂️."

Hosaka-san said: "The biggest bottleneck is the time it takes to run full tests, so I would like to work on reducing that in the future."

As for myself, I've noticed the frustration of having to fix Snapshot Tests when the logic changes but the screen remains unaffected. However, it's been helpful to confirm that there were no differences when transitioning to SwiftUI, which I think was good!
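As a supplement, here is a minimal sketch of the copy-on-merge step mentioned in the Bitrise section above. The bucket name, folder layout, and profile below are placeholders for illustration, not our actual configuration:

```bash
# Sketch: after a PR touching Snapshot Testing is merged, promote the reviewed
# reference images from the PR-review (feature) folder to the develop folder.
# Bucket, paths, and profile are hypothetical placeholders.
aws s3 sync \
  "s3://example-snapshot-bucket/mobile-app-test/ios/feature/__Snapshots__/" \
  "s3://example-snapshot-bucket/mobile-app-test/ios/develop/__Snapshots__/" \
  --exclude "*.DS_Store" \
  --profile user
```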
はじめに こんにちは。 KINTOテクノロジーズ モバイルアプリ開発グループの中口です。 KINTOかんたん申し込みアプリのiOSチームでチームリーダーをしており、チームビルディングの一環として180度フィードバックを実施しましたのでその内容を共有します。 こちらのチーム振り返り会の記事 も同じチームで行った取り組みですので、ご興味あればご覧ください。 実施背景 先日、社内の有志メンバーで 「GitLabに学ぶ 世界最先端のリモート組織のつくりかた ドキュメントの活用でオフィスなしでも最大の成果を出すグローバル企業のしくみ」 の輪読会を行いました。 この輪読会については こちらの記事 で詳しくご紹介されています。 この輪読会は私にとって非常に刺激的で、ここでの学びを何かチームに持ち帰りたいと考えました。 その中でまず初めに興味を持ったものが360度フィードバックでした。 360度フィードバックとは、1人の従業員に対して同僚や上司、部下など複数人の視点からフィードバックをもらう評価手法です。 一般的にフィードバックと聞くと、上司が部下へフィードバックを行うことが多いかと思います。一方で私はメンバー同士、あるいは部下から上司へフィードバックを行うことも重要ではないかと常々考えていたため、この360度フィードバックを実施してみようと思いました。 ただし、輪読会の中で360度フィードバックは調査対象が広すぎたり、業務と直接関係のない人からの評価を受けたりなどデメリットもあるというお話があったので、調査対象を自チームのみに限定した180度フィードバックという手法を教えてもらい、そちらを行うことにしました。 狙い 私はこの180度フィードバックを通じて、以下のような狙いがありました。 チームが求める役割と、自身が認識している役割のギャップを知る 自身の強み、弱みを再認識し今後の成長に活かす チームメンバーが普段どんなことを思っているのか、本音を知る機会を作る アンケートに回答する過程で、メンバーのことを改めて考えることによりチームの一体感を高める この180度フィードバックは、関係の質を向上させるために、メンバー同士がお互いのことを理解し合う非常に良い機会になるのではないかと考えました。 実施方法 対象メンバー チームリーダー:1名 エンジニア:6名 調査方法 Microsoft Formsを使用し、匿名アンケートを実施 下記設問について自身を除く6名分を回答 定量評価(5段階評価) 積極的な姿勢に関する質問 相手を受け入れる姿勢に関する質問 意思決定における姿勢に関する質問 やりきる姿勢に関する質問 未知からの習得に関する質問 自主性に関する質問 定性評価(フリーテキストによる評価) 強みに関する質問 改善ポイントに関する質問 対象者へ感謝の言葉を送る 進める上で工夫したこと メンバーに前向きに取り組んでもらうため、事前に1on1の中で実施する背景や目的を共有する。 完全に匿名であることを認識してもらうため、事前にテストアンケートを実施して、その結果を共有する。 アンケート回答時間が無い、などの理由でアンケートの回収率が下がることを避けるため、あらかじめアンケート回答のための時間を用意する。 フィードバックをする以上、厳しい言葉やネガティブに感じる言葉をコメントする可能性があります。少しでも前向きな気持ちでアンケートを終了してもらいたいので、アンケートの最後に日頃の感謝を伝える項目を用意する。(また、フィードバック結果を見る際も、感謝のコメントが綴られていることで前向きにフィードバックを受け取ってもらいやすくなる) チームリーダーとしてオープンな姿勢を示したかったので、私のフィードバック結果はチームメンバーに開示し、今後の改善ポイントなどを共有する。(ただしメンバーへは自身のフィードバック結果の開示を強要しない) 私のフィードバック結果の要約 私のフィードバック結果を、要約してみたので下記に記載いたします。 強み コミュニケーション力が高く、気さくに話せる。例えば、ミーティングでは積極的に発言し、分かりやすく説明するよう心がけている。 積極的に学び、他のメンバーと情報を共有。新しい技術動向を常に把握し、Slackやミーティングなどで共有している。 チームワーク向上のため努力。定期的にチームイベントを企画し、メンバー同士の親睦を深めている。 情報収集力やスピード感のある対応力。問題発生時には迅速に対応し、関係者に正確な情報を伝えている。 思いやりが強く、頼りになる。メンバーの悩みに耳を傾け、適切なアドバイスをしている。 改善点 プロダクトに対する仕様理解。機能の仕様を十分に理解せずに開発を進めてしまうことがある。 タスクチケットの整理をもう少し頻度を上げて行う。チケットの優先順位付けが不十分なため、重要なタスクが後回しになることがある。 施策を行う際に背景や目的をしっかりと説明する。施策の意図が伝わっていないため、メンバーの理解が不足することがある。 リスクをとった行動が少ない。新しい取り組みに対して慎重になりすぎ、チャンスを逃すことがある。 強みの部分では、日頃意識して取り組んでいる部分が評価されていると思ったので非常に嬉しかったです。 一方で改善点に関しては、自分自身が自覚していることだけでなく、自覚できていなかった部分についても気づくことができ、今後の成長に活かすことができると感じました。 また最後にメンバーからの感謝の言葉もいただき、とてもモチベーションが上がりました。 今後もより一層チームに貢献できるよう努めていきたいと思います。 180度フィードバックを通して気付いたチームの強みと改善点 チーム全体についても要約してみました。 チーム全体の強み 多様な技術力とリーダーシップ : メンバー各自が高い技術力とリーダーシップを持ち合わせている。 コミュニケーション能力 : チーム内のコミュニケーションが活発で、情報共有が効果的に行われている。 問題解決能力 : 技術的な課題や難易度の高いタスクに対する積極的な取り組み。 学習意欲 : 新しい知識や技術への取り組みが積極的で、常に成長を続けている。 チーム全体の改善点 情報共有の効率化 : 新しい技術やプロジェクトの情報をより効率的に共有する方法の改善。 役割分担の明確化 : メンバーの能力を最大限に活用するための役割分担と責任のさらなる明確化。 大局的視点の養成 : プロジェクト全体の視点を持ち、タスクの目的と過程をチーム全体で共有することを重視。 技術共有とナレッジマネジメント : 技術やナレッジのチーム内横展開を促進し、全メンバーのスキルアップを図る。 また、各チームメンバーの強みや役割を下記図のようにまとめてみました。 実施後アンケート 180度フィードバックを実施した後に、調査を行ってみてどうだったかアンケートを実施しました。 (回答数は7名です) 期待値の変化 実施前:7.29→実施後:9.19 NPS (NPSとは?) 57 定期的(半年毎など)に180度調査を実施したいと思いますか? 86%が「Yes」と回答 「”参加した後”の満足度について、その理由を教えてください(フリーテキスト)」のAI要約 アンケートの結果から、回答者は自己認識を深め、自分の課題を見つけることができたと感じています。 また、他者の視点からのフィードバックを通じて、普段気づかない観点を得ることができ、 具体的な評価や改善点を知ることで、今後の行動指針が明確になったと述べています。 これらの結果は、アンケートが有効な自己反省のツールであることを示しています。 まとめ 今回実施した180度フィードバックに関して、運営面での下記のような課題がありました。 回答全体の平均点が高く、差がつきにくかった。 メンバーの入れ替えのタイミングと重なってしまい、一部のメンバーに適切なフィードバックとならなかった。 ただ、全体的には私も含めてメンバーの満足度の高いフィードバックができたと感じています。 アンケート結果からもわかる通り、メンバーの定期的な実施意向も高いため今後も引き続き取り組んでいきたいと思います。 今回のフィードバック結果を受け、私自身やチーム全体としての課題を再認識できましたので、今後の成長に活かしていきたいと思います。 また、メンバーも同様にそれぞれの課題を見つけて成長の機会としていただければとても嬉しいです。
Introduction (Overview of Activities) We started the "Manabi-no-Michi-no-Eki” at KINTO Technologies! So you'd ask, what is "Manabi (learning) + Michi-no-Eki (roadside station)" about? At our company, we do our best to foster a culture of output by hosting different activities including this Tech Blog, by presenting at events, or promoting various other initiatives. So, what drives our focus on output? We believe that input, or what we have learned, is a crucial prerequisite for output. That is why we created a team dedicated to strengthening our internal learning capabilities, initiated by volunteers within our company. The name "Michi-no-Eki (roadside station)" incorporates various ideas as well. Have you ever been to roadside stations in Japan? It gathers products from local communities, provides rest for travelers, and serve as hubs where you can encounter unique experiences found nowhere else. That's where our idea of Manabi-no-Michi-no-Eki (Roadside Station of Learning) comes from: a desire to create a unique place where everyone on the journey of learning can drop by, be thrilled by new encounters , and come together to be uplifted . What Does the Manabi-no-Michi-no-Eki Do? As a "roadside station" where study groups and workshops intersect, we aim to support internal activities centered around study sessions: Engaging in internal communications Letting everyone know what study groups are being held. Sharing what the current study groups are like. Supporting study groups For those who say, 'I want to start a study group', but I don’t know how to. For those who are organizing study groups but want to improve them. Offering advice on other concerns. Asking the Organizers: What Ideas Led to the Creation of 'Manabi-no-Michi-no-Eki'? Nakanishi: I have always believed that life is about learning. People constantly seek knowledge to find meaning in life, to find a place of solace in their hearts, and to energize their lives. The most fascinating people I have met so far who impressed me the most are those who are constantly learning new things; they shine the most. We believe that creating a company-wide space for colleagues to gather would enhance our daily work output. However, we began receiving feedback about the scattered information on internal study groups and a desire to understand the available learning environments. This prompted the launch of this project. HOKA: Working in human resources, I often hear during employee interviews a common desire for increased communication across different groups. This sparked a feeling that I wanted to do something about it. At the same time, through my work I have observed that successful people in KINTO Technologies often participate in study groups. These two points intersected, sparking the idea of creating a system where people could interact with each other while learning. When I discussed this idea with my boss, he introduced me to Kinchan and Nakanishi-san, and that is how the "Manabi-no-Michi-no-Eki" project was born. Kinchan: I have been involved with the culture of study groups on various occasions over the past 15 years. When I joined KINTO Technologies, I found that the company already had a good culture where learning is an integral part of everyday work. I wanted to expand this positive culture even further and contribute to the growth of people, our organization, and our business. That is why we've decided to take action by gathering information about study groups across the company. 
Establishment Step 1: Compile information on internal study groups! KINTO Technologies is an organization where voluntary learning activities led by employees such as study groups and reading circles are very active. Various study groups are held within the company, but questions often arise, such as 'where and when are they held?'. Some employees want to learn more about what's available. Having heard many voices, I wanted to give them more visibility. This was the starting point of our activities. We quickly gathered information and discovered that there were about 40 study groups. We were also aware of the existence of other hidden study groups, so we estimated that there were probably more than 60 groups in the company, including smaller ones. The three of us who found amazing that there were so many active study groups, started discussions at the end of November 2023. Step 2: What shall we do? In our first meeting, we listed what we wanted to do. Should we just storm into these study groups? Should we post about them on the Tech Blog more often? Many ideas came up, but we settled on the premise that it would be important to let people know about us internally first. So, we decided to participate in an in-house LT (Lightning Talk) event, which was to be held three weeks later on December 21. Without mentioning the "Manabi-no-Michi-no-Eki" yet, each of us three took the stage at it, and Kinchan won (yay!). First, we took action to make ourselves known to people within the company. Note: For more information, please see our Tech Blog article about the LT Event. ↓↓ We Held an In-House-Only LT (Lightning Talk) Event! Step 3: Make an inception deck! At our December 27, 2023 meeting, we realized the need for guidelines because we have so many things we wanted to do. We decided to create an "inception deck" from the beginning of the year. Inception deck is a software development tool to ensure that all team members have a common understanding of and goals for the development of a project. In ours, we clarified the following four points: Why We Are Here Elevator Pitch Not-To-Do List Our A Team By talking through the above, the name "Manabi-no-Michi-no-Eki (Roadside Station of Learning)" naturally came to mind, and we were able to decide on it without hesitation. In the process of creating our inception deck, we each shared our thoughts on learning with discussions of cooperative learning and about Peter Koenig’s Source Principle. It was a moment when I felt that the process of creating the inception deck itself was also a learning experience for us. And now: Let's Start the Engines! The inception deck was completed in late January 2024. When it was finished, we were a little impatient. We had a clear idea of our goals and tasks, and we were eager to get started right away. Kinchan, who proposed the inception deck, was probably secretly pleased, saying, 'just as expected.' As a first step to get things moving, we announced the birth of "Manabi-no-Michi-no-Eki" at the monthly All-Hands meeting with all KINTO Technologies members! At the same time, we also started the "Joining the next door study group" series. On February 22, we gathered everyone running study groups in a meeting room to interview them. Without having prepared any interview questions beforehand, we just pulled out our phones and recorded on the spot. Both the interviewers and the interviewees were very surprised. Although there was some confusion, they cooperated with us. (Thank you all!) 
We later edited unnecessary segments so that it could be played as a podcast, and we were able to successfully launch it to all employees via Slack on March 13. Our Next Steps We then run three study groups, published two podcasts, and published two blog articles, while reflecting and discussing our future! What do people want to know? Are they interested in the study groups? What do the organizers want people to know? As a result of the discussion, we came to the conclusion that "the purpose and needs of each study group are different. It would be better to individually assemble a story tailored to each of their characteristics." Moreover, What would be the role of our podcasts? Content as an advertisement for the study group? Content as internal newsletters? After considering these points, we came to the conclusion that "KINTO Technologies holds so many study groups," that to sum it up, "our goal will be achieved if we can give visibility to how rooted our study culture is." As for the future, we have decided to proceed with the activity of creating podcasts, running study groups, learning from any failures, and expanding wherever possible! In fact, I was a bit nervous about this agile approach—iterating, correcting, and steering things in a better direction. Before joining KINTO Technologies, I worked for a company with rigid rules and flows for handling information. As one of the organizers of 'Manabi-no-Michi-no-Eki,' this is an opportunity for me to learn about KINTO Technologies' development style of 'Make Small, Grow Big' while working in HR. The "Manabi-no-Michi-no-Eki" has just begun. We look forward to keeping you updated about it on the KINTO Tech Blog from time to time. Thank you very much for your support!
An Issue We Encountered During Testing With Spring Batch Using DBUnit

Introduction

Hello. I am Takehana from the Payment Platform Team, Common Service Development Group[^1][^2][^3][^4][^5][^6] at the Platform Development Division. This time, I would like to write about an issue we encountered while testing with Spring Batch + DBUnit.

Environment

| Libraries, etc. | Version |
| --- | --- |
| Java | 17 |
| MySQL | 8.0.23 |
| Spring Boot | 3.1.5 |
| Spring Boot Batch | 3.1.5 |
| JUnit | 5.10.0 |
| Spring Test DBUnit | 1.3.0 |

Encountered Issue

We use DBUnit for testing a Spring Boot 3 application with Spring Batch. The batch process follows the chunk model, in which the ItemReader performs DB searches and the ItemWriter updates the DB. With this setup, when running tests with data volumes exceeding the chunk size, the tests never completed...

Investigations and Attempts

Observations

The step under test:

```java
new StepBuilder("step", jobRepository)
    .<InputDto, OutputDto>chunk(CHUNK_SIZE, transactionManager)
    .reader(reader)
    .processor(processor)
    .writer(writer)
    .build();
```

I was testing a batch with the step above as follows:

```java
@SpringBatchTest
@SpringBootTest
@TestPropertySource(
    properties = {
      "spring.batch.job.names: Foobar-batch",
      "targetDate: 2023-01-01",
    })
@Transactional(isolation = Isolation.SERIALIZABLE)
@TestExecutionListeners({
  DependencyInjectionTestExecutionListener.class,
  DirtiesContextTestExecutionListener.class,
  TransactionDbUnitTestExecutionListener.class
})
@DbUnitConfiguration(dataSetLoader = XlsDataSetLoader.class)
class FoobarBatchJobTest {

  @Autowired private JobLauncherTestUtils jobLauncherTestUtils;

  @BeforeEach
  void setUp() {}

  @Test
  @DatabaseSetup("classpath:dbunit/test_data_import.xlsx")
  @ExpectedDatabase(
      value = "classpath:dbunit/data_expected.xlsx",
      assertionMode = DatabaseAssertionMode.NON_STRICT_UNORDERED)
  void launchJob() throws Exception {
    val jobExecution = jobLauncherTestUtils.launchJob();
    assertEquals(ExitStatus.COMPLETED, jobExecution.getExitStatus());
  }
}
```

When the test data was smaller than the chunk size, the test passed without any issues. However, when the test data exceeded the chunk size, the test froze and never completed. (This occurred even with a chunk size of 1 and a data count of 1.)

Suspecting the issue might be with DB connections, I noted that Spring Batch treats each chunk as a single transaction; if chunks were processed in parallel, more DB connections than the number of concurrent executions would be required. So I adjusted the pool size to test this hypothesis:

```yaml
spring:
  datasource:
    hikari:
      maximum-pool-size: 100
```

I changed the value from 10 to 100, among other adjustments, but the issue was still not resolved...

Start debugging

I enabled debug logging and ran the application to observe the behavior. Execution seemed to stop at the log output on line 88 of org.springframework.batch.core.step.item.ChunkOrientedTasklet, so I set a breakpoint to verify. I then reached line 408 of org.springframework.batch.core.step.tasklet.TaskletStep. It appeared that the semaphore could not acquire a lock (i.e., it was waiting for the lock to be released), causing execution to halt there.

Delving deeper into Spring Batch

Continuing the investigation, I traced the flow of execution in the step processing. A rough outline of the relevant parts is as follows.
1. doExecute of TaskletStep is executed
2. A semaphore is created
3. The semaphore is passed to ChunkTransactionCallback (an implementation of TransactionSynchronization), linked with the transaction execution, and configured in the RepeatTemplate
4. Step processing for the chunk begins
5. The semaphore is locked in doInTransaction of TaskletStep
6. The main step processing is executed
7. The commit is executed
8. AbstractPlatformTransactionManager's triggerAfterCompletion method is called, and invokeAfterCompletion is executed via TransactionSynchronizationUtils
9. The semaphore is released by invokeAfterCompletion, in the afterCompletion method of ChunkTransactionCallback
10. If data remains, return to 4

In this test run, the semaphore was never released at step 9, so when processing passed through step 4 again, it ended up freezing at step 5.

Why was the semaphore not released...?

While reviewing the flow above, around the semaphore release in step 9, I found that the release is guarded by a condition in the relevant code: status.isNewSynchronization() did not return true, so invokeAfterCompletion was not executed.

org.springframework.transaction.support.DefaultTransactionStatus#isNewSynchronization is as follows:

```java
/**
 * Return if a new transaction synchronization has been opened
 * for this transaction.
 */
public boolean isNewSynchronization() {
    return this.newSynchronization;
}
```

It returns whether a new transaction synchronization has been opened for this transaction.

Considerations

We have not yet fully traced why isNewSynchronization does not become true. However, I thought I might find some clues in the logs from our various trial-and-error attempts.

If @Transactional is not applied to the test class:

```
2024-03-27T08:57:14.527+0000 [Test worker] TRACE o.s.t.i.TransactionInterceptor - Completing transaction for [org.springframework.batch.core.repository.support.SimpleJobRepository.update] Foobar-batch 19
2024-03-27T08:57:14.527+0000 [Test worker] DEBUG o.s.orm.jpa.JpaTransactionManager - Initiating transaction commit Foobar-batch 19
2024-03-27T08:57:14.527+0000 [Test worker] DEBUG o.s.orm.jpa.JpaTransactionManager - Committing JPA transaction on EntityManager [SessionImpl(1075727694<open>)] Foobar-batch 19
2024-03-27T08:57:14.534+0000 [Test worker] DEBUG o.s.orm.jpa.JpaTransactionManager - Closing JPA EntityManager [SessionImpl(1075727694<open>)] after transaction Foobar-batch 19
2024-03-27T08:57:14.536+0000 [Test worker] DEBUG o.s.b.repeat.support.RepeatTemplate - Repeat operation about to start at count=2 Foobar-batch 19
```

If @Transactional is applied to the test class:

```
2024-03-27T09:04:04.600+0000 [Test worker] TRACE o.s.t.i.TransactionInterceptor - Completing transaction for [org.springframework.batch.core.repository.support.SimpleJobRepository.update] Foobar-batch 20
2024-03-27T09:04:04.601+0000 [Test worker] DEBUG o.s.b.repeat.support.RepeatTemplate - Repeat operation about to start at count=2 Foobar-batch 20
```

When @Transactional is applied, the "Initiating transaction commit" (and subsequent commit) logs from JpaTransactionManager do not appear.

The test class uses TransactionalTestExecutionListener and, because of @Transactional, everything executes within the same transaction. This ensures that the test data registered with DBUnit is accessible to the code under test and is rolled back after the test completes. However, I concluded that isNewSynchronization does not become true because the existing transaction is being reused (a new transaction is not started) when the step is executed.
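To see why a reused outer transaction keeps afterCompletion from firing, the behavior can be reproduced outside Spring Batch. The following is a minimal, hypothetical illustration (the class name AfterCompletionDemo is made up for this sketch; it is not the production or Spring Batch code): a TransactionTemplate standing in for the test's @Transactional transaction wraps another TransactionTemplate standing in for one chunk's transaction. The synchronization registered inside the inner template, which is the role ChunkTransactionCallback plays for the semaphore release, only has its afterCompletion called when the outer transaction completes.

```java
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionSynchronization;
import org.springframework.transaction.support.TransactionSynchronizationManager;
import org.springframework.transaction.support.TransactionTemplate;

// Hypothetical illustration class, not part of the actual batch code.
public class AfterCompletionDemo {

    private final PlatformTransactionManager txManager;

    public AfterCompletionDemo(PlatformTransactionManager txManager) {
        this.txManager = txManager;
    }

    public void run() {
        // Plays the role of the test's @Transactional transaction.
        TransactionTemplate outer = new TransactionTemplate(txManager);
        outer.executeWithoutResult(outerStatus -> {
            // Plays the role of one chunk's transaction; with the default
            // PROPAGATION_REQUIRED it joins the already-open outer transaction.
            TransactionTemplate inner = new TransactionTemplate(txManager);
            inner.executeWithoutResult(innerStatus -> {
                TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
                    @Override
                    public void afterCompletion(int status) {
                        // Stands in for ChunkTransactionCallback#afterCompletion,
                        // where the semaphore would be released.
                        System.out.println("afterCompletion fired");
                    }
                });
            });
            // Because the inner template merely participated in the outer
            // transaction, nothing was actually committed here and the
            // callback above has not run yet.
            System.out.println("inner transaction template finished");
        });
        // Only now, when the outer transaction completes, does afterCompletion fire.
    }
}
```

In the real test, the outer transaction is held open by TransactionalTestExecutionListener for the whole test method, so the callback that would release the semaphore never runs, and the next chunk blocks while trying to acquire it.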
Workaround

As a somewhat brute-force workaround that avoids TransactionalTestExecutionListener, I performed the cleanup manually around each test, which successfully prevented the freeze.

```java
class FoobarTestExecutionListenerChain extends TestExecutionListenerChain {

  private static final Class<?>[] CHAIN = {
    FoobarTransactionalTestExecutionListener.class,
    DbUnitTestExecutionListener.class
  };

  @Override
  protected Class<?>[] getChain() {
    return CHAIN;
  }
}

class FoobarTransactionalTestExecutionListener implements TestExecutionListener {

  private static final String CREATE_BACKUP_TABLE_SQL =
      "CREATE TEMPORARY TABLE backup_%s AS SELECT * FROM %s";
  private static final String TRUNCATE_TABLE_SQL = "TRUNCATE TABLE %s";
  private static final String BACKUP_INSERT_SQL = "INSERT INTO %s SELECT * FROM backup_%s";

  private static final List<String> TARGET_TABLE_NAMES = List.of("Foobar", "fuga", "dadada");

  /**
   * Create the test working tables.
   *
   * @param testContext
   * @throws Exception
   */
  @Override
  public void beforeTestMethod(TestContext testContext) throws Exception {
    val dataSource = (DataSource) testContext.getApplicationContext().getBean("dataSource");
    val jdbcTemp = new JdbcTemplate(dataSource);

    // Back up the existing data to temporary tables before the test
    TARGET_TABLE_NAMES.forEach(
        tableName -> jdbcTemp.execute(String.format(CREATE_BACKUP_TABLE_SQL, tableName, tableName)));

    // Initialize the target tables
    TARGET_TABLE_NAMES.forEach(
        tableName -> jdbcTemp.execute(String.format(TRUNCATE_TABLE_SQL, tableName)));
  }

  /**
   * Restore the original data from the test working tables.
   *
   * @param testContext
   * @throws Exception
   */
  @Override
  public void afterTestMethod(TestContext testContext) throws Exception {
    val dataSource = (DataSource) testContext.getApplicationContext().getBean("dataSource");
    val jdbcTemp = new JdbcTemplate(dataSource);

    // Restore the tables
    TARGET_TABLE_NAMES.forEach(
        tableName -> jdbcTemp.execute(String.format(TRUNCATE_TABLE_SQL, tableName)));
    TARGET_TABLE_NAMES.forEach(
        tableName -> jdbcTemp.execute(String.format(BACKUP_INSERT_SQL, tableName, tableName)));
  }
}
```

Remove TransactionDbUnitTestExecutionListener and avoid using TransactionalTestExecutionListener (use DbUnitTestExecutionListener to load the test data from Excel). Then create a custom TestExecutionListener that moves the data of the target tables to temporary tables before each test and restores it afterwards: beforeTestMethod runs before the test method, and afterTestMethod runs after it. A minimal sketch of how the test class is wired to this chain is shown at the end of this post. This approach made it possible to run the tests while preserving Spring's transaction management.

Impressions

Despite extensive searching, I could not find satisfactory information, which left the issue in a state of uncertainty for a while. However, by digging further into the Spring source code, I made various discoveries, and it turned out to be a valuable learning experience in code reading. (Although I have not fully grasped everything yet...) I also wondered whether I was fundamentally misunderstanding how to use Spring and the test libraries, whether I was using them the way the library authors intended, and whether more suitable classes were available. This has shown me that I still have much to learn. I would like to keep approaching exploration and improvement with the same curiosity, asking, "How does this work?"

Thank you for reading this article. I hope it will be helpful to others facing similar issues.
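For reference, here is the wiring sketch mentioned above. It is illustrative only: the class and dataset names (FoobarBatchJobTest, test_data_import.xlsx, FoobarTestExecutionListenerChain) are the hypothetical ones used earlier in this article, and XlsDataSetLoader is assumed to be the project's own Excel data set loader on the test classpath, as in the original test class.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import com.github.springtestdbunit.annotation.DatabaseSetup;
import com.github.springtestdbunit.annotation.DbUnitConfiguration;
import lombok.val;
import org.junit.jupiter.api.Test;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.test.JobLauncherTestUtils;
import org.springframework.batch.test.context.SpringBatchTest;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;
import org.springframework.test.context.support.DirtiesContextTestExecutionListener;

@SpringBatchTest
@SpringBootTest
// No @Transactional here: the test no longer runs inside a test-managed transaction,
// so each chunk commits normally and the semaphore is released as expected.
@TestExecutionListeners({
  DependencyInjectionTestExecutionListener.class,
  DirtiesContextTestExecutionListener.class,
  // Replaces TransactionDbUnitTestExecutionListener: loads the Excel data set with
  // DbUnitTestExecutionListener and backs up / restores the tables around each test.
  FoobarTestExecutionListenerChain.class
})
@DbUnitConfiguration(dataSetLoader = XlsDataSetLoader.class)
class FoobarBatchJobTest {

  @Autowired private JobLauncherTestUtils jobLauncherTestUtils;

  @Test
  @DatabaseSetup("classpath:dbunit/test_data_import.xlsx")
  void launchJob() throws Exception {
    val jobExecution = jobLauncherTestUtils.launchJob();
    assertEquals(ExitStatus.COMPLETED, jobExecution.getExitStatus());
  }
}
```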
[^1]: Post 1 by a member of the Common Service Development Group [ Domain-Driven Design (DDD) incorporated in a payment platform intended to allow global expansion ]
[^2]: Post 2 by a member of the Common Service Development Group [ Remote Mob Programming: How a Team of New Hires Achieved Success Developing a New System Within a Year ]
[^3]: Post 3 by a member of the Common Service Development Group [ Efforts to Improve Deploy Traceability to Multiple Environments Utilizing GitHub and JIRA ]
[^4]: Post 4 by a member of the Common Service Development Group [ Creating a Development Environment Using VS Code's Dev Container ]
[^5]: Post 5 by a member of the Common Service Development Group [ Spring Boot 2 to 3 Upgrade: Procedure, Challenges, and Solutions ]
[^6]: Post 6 by a member of the Common Service Development Group [ Guide to Building an S3 Local Development Environment Using MinIO (RELEASE.2023-10) ]
Introduction

Hello. I am Nakaguchi from KINTO Technologies' Mobile App Development Group. I lead the iOS team for the KINTO Easy Application app, which I will refer to as "the iOS team" in this article for convenience.

We hold Retrospectives irregularly, but I find that they can be rather challenging. Am I succeeding in bringing out everyone's true feelings? What are the team's real challenges? Is my facilitation effective? And so on. I recently watched a webinar by Classmethod, Inc. and was so impressed by their session on "How to Build a Self-Managed Team" that I decided to apply for another training session about Retrospectives that they introduced in it. In this article, I'll share my experience attending that session.

Pre-Alignment Session

Before the Retrospective, we had a meeting with Mr. Abe and Mr. Takayanagi from Classmethod. In order to hold Retrospectives best suited to our team's situation, we discussed the current status of the iOS team with them for nearly an hour.

Overview of the Retrospective

On the day of the Retrospective, Mr. Takayanagi and Mr. Ito came to the company to facilitate the meeting. The meeting lasted about two hours and followed this general flow:

1. Self-introductions
2. Aligning the purpose of our Retrospectives
3. Individual exercise on "How to make the team a little bit better"
4. Same content as above, but in pairs
5. Sharing the findings with the whole team
6. Thinking about specific action plans in pairs
7. Sharing the findings with the whole team
8. Closing

First Half

Out of the almost two-hour meeting, it is worth noting that about half of the time was spent on "1. Self-introductions" and "2. Aligning the purpose of our Retrospectives". During "1. Self-introductions", the facilitators asked us about things such as our names or nicknames, our roles on the team, and the extent of our interactions with other team members. They were looking not only at the atmosphere of the team and the personality of each of us, but also at the relationships and compatibility between team members.

During "2. Aligning the purpose of our Retrospectives", I got everyone to agree on what can be done to make the current team a little better, which was the topic I had requested. After a major release last September, our team is currently focused on improving features and refactoring, so although we are in a less busy period, it is no easy feat to make a team in our situation a little better. I also explained the purpose, role, and expectations for each participant that I, as the meeting organizer, had in mind when inviting them. I was told that this helps clarify how everyone should participate and makes it easier for them to speak up. I think it was also a good opportunity for me to talk about things for which I usually don't find the right timing, or that I can't say directly. By spending this time in the first half of the meeting, we created an atmosphere where it was easy for everyone to speak, and I felt that the overall rapport improved greatly.

Facilitation

Second Half

After thinking individually about "3. How to make the team a little bit better", we proceeded with the work. We didn't use any framework specific to retrospectives; instead, we simply wrote down what could make the team a little better on sticky notes. We did individual work and then moved on to pair work. There are situations where pair work is beneficial and others where it is not; in this case, the team seemed to benefit from it.
Also, the combination of people is key, as it is important not to cause psychological strain among the participants.

Pair Work

After that, everyone gave presentations, and many opinions came up that I had not been able to draw out in the Retrospectives I had held so far. I felt I was able to draw them out thanks to the rapport we built and the pair work in the first half. Then, based on the opinions that came up, everyone was asked to think about what specific actions should be taken ("6. Thinking about specific action plans in pairs"), and each pair presented their ideas.

Presentation

As a result, we decided to implement the following actions:

- Creating a Slack channel, as a place where everyone can chat freely (we decided to make it a private channel rather than a public one, since we could build more trust by talking more about ourselves)
- Setting up a weekly meeting dedicated to chatting
- Trying to gather in meeting rooms as much as possible (many people used to attend meetings online from their desks even when they were in the office)
- Setting up guideline consultation meetings regarding assigned tasks
- Clearly stating the deadline on task tickets

We started addressing these as quickly as possible, beginning the very next day.

Closing

At the end of the meeting, Mr. Takayanagi talked about the importance of customizing meetings, such as understanding how to allocate the meeting time and the characteristics of the participants in order to draw out their opinions. In particular, at this Retrospective, he focused his facilitation on people, using a lot of pair work.

Post-Retrospective Survey Results

Here are the results of the feedback survey taken after the Retrospective (out of 10 responses).

- Change in evaluation: Before 6.3 -> After 9
- NPS: 80 (What is NPS?)
- AI summary of "How satisfied did you feel after you participated?" (free text): The survey results showed that participants were happy with the session and the facilitator's explanations. In addition, there were many positive comments about how specific decisions were made that led to the next actions. Furthermore, the opportunity to understand the thoughts of other team members, and the chance to hear things that are not normally heard, were also highly valued. These results suggest that the meeting was meaningful for everyone.

**Just being above 0 would have been great, but we got a whopping NPS of 80!**

Final Thoughts

Through this Retrospective, I realized that many members felt there was a lack of communication, and we were able to decide on our next course of action, so it was a very fulfilling Retrospective. I was happy to see from the survey results that the participating members were also satisfied. I also realized that the role of the meeting facilitator is very important. Facilitation is an advanced skill that cannot be acquired overnight, and I think the organization should focus on developing and acquiring such skills. To start with, I would like to study facilitation and become able to conduct better meetings.
I am Pham Hoang, in charge of development for the authentication platform of KINTO services. In this article, I will talk about the passkeys implemented in the Global KINTO ID Platform (GKIDP). After attending OpenID Summit Tokyo 2024 and hearing about passkeys combined with OIDC, I wanted to share how much benefit passkeys can bring to our ID platform.

I. Passkey autofill in GKIDP

Passkeys are a replacement for passwords that let users sign in to websites and apps from their devices faster, more easily, and more securely. Below is how a user performs passkey authentication with a single click.

![](/assets/blog/authors/pham.hoang/fig1.gif =400x)

Figure 1. Logging in to KINTO Italy's ID platform with a passkey

The great thing about passkeys is the seamless UX, just like password autofill. Users do not need to understand the complicated differences between passkeys and passwords. The system uses asymmetric cryptography behind the scenes, without anything like a password that the user has to remember: everything is set up with nothing more than Face ID authentication.

Passkeys are the most secure and most advanced authentication mechanism, supported by Android and iOS since late 2022. They are still under development and continue to be upgraded. To keep GKIDP (Global KINTO ID Platform) up to date and convenient, we introduced passkey autofill in July 2023, shortly after Mercari, Yahoo! JAPAN, GitHub, and MoneyForward each introduced it.

In the next part, I will explain how we leverage passkeys for federated login so that GKIDP users can use the "global login" feature more comfortably.

II. Passkeys in federated identity

The Global KINTO ID Platform (GKIDP) is the authentication system for KINTO services which, as of March 2024, has been rolled out in Italy, Brazil, Thailand, Qatar, and countries in South America. To comply with the GDPR and other data protection regulations, GKIDP is split into multiple identity providers (IDPs), one per country, and identifies each user as a single global ID through a "coordinator". By leveraging the global ID, users can use KINTO services around the world with a common identity.

Figure 2. GKIDP and passkey-enabled IDPs

Normally, when logging in with a passkey (see Figure 1), the user authenticates through their local IDP via federation and uses the KINTO services in their own country. In our case, however, the passkey feature also has to work in RP (Relying Party) applications and "satellite services" such as KINTO ONE Personal in Brazil and other KINTO services, so we implemented passkeys in each country's IDP (for example, the Brazil IDP).

It was good to learn that this advantage was also covered at OpenID Summit Tokyo 2024, which we attended, and that implementing passkeys in combination with the OpenID Connect protocol is recommended.

Furthermore, GKIDP has a unique feature that lets users who travel or move to another country where KINTO services are available log in to KINTO and related services abroad just as they would at home. We call this the "global login" feature. It requires several steps, but it can be managed with a single username and password, so users do not have to remember separate credentials for each service. With passkeys on top of that, users no longer need to remember or type their login information, and the login process for global users is streamlined into a few simple steps. For example, let's look at how a KINTO GO user in Italy (the user in Figure 1) accesses the KINTO SHARE service in Thailand using global login. With just a few clicks, the login time has been reduced from an average of 2 to 3 minutes to about 30 seconds (Figure 3). Regardless of whether the local IDP supports passkeys, a single passkey can be used to access all KINTO services.

![](/assets/blog/authors/pham.hoang/fig3.gif =300x)

Figure 3. Global login with a passkey

Passkeys are used not only for local and global login but also on every authentication screen, including re-authentication. Once a passkey has been registered, the user hardly ever needs a password to confirm anything anymore.

III. Passkeys and the demand for them

Figure 4. Users registered with passkeys

On the Italian IDP, 875 users have registered using passkeys, accounting for 52.2% of new users since the passkey release. We expect this percentage to grow as more users update to operating systems that support passkey autofill (iOS 16.0 or later, Android 9 or later).

At KINTO Brazil, where desktop users make up the majority, more than 20% of the 1,176 users who registered after the release used a passkey, even though passkeys are not yet widely used on Microsoft PCs.

IV. Closing

As a KINTO engineer, I am very happy to introduce new technology toward a passwordless future and to strengthen the protection of our users' data. With passkeys, users can now log in easily with the highest level of security. I look forward to continuing to connect KINTO services around the world to our new IDP hub, GKIDP.

Other articles by Hoang Pham: https://blog.kinto-technologies.com/posts/2022-12-02-load-balancing/
[Link to Amazon]( https://amzn.asia/d/06GXK0Fd )

By Hans P. Bacher and Sanatan Suryavanshi

I originally planned to summarize the contents of "Vision" as a memo so I would not forget them, but it is such a good book that I wanted to share it, so here I introduce part of it.

The designed visuals that fill our daily lives evoke all kinds of emotions in us. This book unravels why certain visuals leave such a strong impression on us and how to understand the psychology behind them. The authors teach concrete methods for telling stories through visuals, for example how choices of color and shape act on our emotions. With this, I think even non-specialists can interpret their everyday visual experiences more richly. Reading "Vision" should give us a new perspective on our daily lives. If this post piques your interest, I definitely recommend picking up the book.

The book is structured as follows:

- Foreword
- Introduction
- What is the process of visual communication?
- The psychology of images
- Line
- Shape
- Value
- Color
- Light
- Camera
- Composition
- Summary

In this post, I will briefly introduce the content of "What is the process of visual communication?", "The psychology of images", and "Line".

What is the process of visual communication?

The author says that the process of visual communication is an automatic one, in which what enters through the eyes instantly triggers various emotions. For example, just by looking at a movie poster showing "shadows stretching down a dim alley" and "a person cowering there in fear", we intuitively recognize that the film's themes are anxiety and fear. This instantaneous emotional reaction is triggered automatically. The stated purpose of the book is to give readers the ability to break this automatic processing down into its processes and elements and to understand why such feelings are evoked, and the next chapter promptly explains this automatic processing from a psychological perspective.

The psychology of images

Why do we feel relaxed or frightened when we look at an image? To explain this process, the book discusses three psychological aspects of the effect images have on us:

1. Association
2. Mechanism
3. When it resonates

1. Association: For example, when a dim back alley is combined with dark shadows, we generally feel fear. Images and video are linked to our past memories, and when the brain sees them, it automatically evokes specific emotions. This resembles the process of "association." Therefore, by selecting and associating the right visual elements, a work can leave a powerful impression on the viewer.

2. Mechanism: In visual design, the combination of elements such as line, shape, and color plays an important role. For example, when opposing colors (※1) are placed next to each other, contrast arises and creates stimulation. In this way, visual elements interact to produce stimulation or, at times, harmony.

3. When it resonates: "What you are trying to say 'resonates' when the content you are trying to convey and the way you convey it match." (quoted from p. 20) For example, if pop colors are used in a scene depicting the tragic death of a loved one, that sadness becomes harder to convey; something whose content and delivery do not match will not resonate with the viewer.

The author emphasizes that actively combining design elements such as color in this way enhances the appeal of a picture. Furthermore, he states that such elements should not be left to chance or to "whatever happens to be there," but should be chosen deliberately in order to appeal to the viewer's emotions.

The anatomy of an image

"Anatomy" here means dissection. The author says that by breaking a picture down using the items listed below, you can build a "way of seeing," and that this is the foundation of visual storytelling. He also recommends keeping the list where you can refer back to it at any time.

- Subject: literally, the subject.
- Format: the aspect ratio of the image.
- Orientation: portrait or landscape.
- Framing: placement within the composition.
- Line: linear elements.
- Shape: the forms within the frame.
- Value: the degree of lightness or darkness.
- Color: literally, color.
- Pattern: design or repeating elements.
- Silhouette: the outline of a design element filled in black.
- Texture: information that describes the contours of a design element.
- Light: elements that shine brightly.
- Depth: the sense of space.
- Edge: the strength or softness of the boundaries separating shapes.
- Movement: all moving elements.

Line

Lines, called "compositional lines," create the path the eye follows. They are so basic that they tend to be overlooked, but the author says they have many facets and the power to enable a wide variety of effects. The figure below (a partial excerpt) illustrates the main kinds of lines.

The frame's border lines apply to all of 1 through 4: the top, bottom, left, and right borders that exist in every composition.
1 and 2: The people in the composition become compositional lines according to the direction they face.
3: The actual and implied movement of objects forms clear lines.
4: A dark mass becomes a compositional line.

Direction of lines

The direction of a line is its relationship to the top, bottom, left, and right borders of the frame. The direction of lines can express emotion, and combined with an appropriate motif, it can express rich feelings. For example:

- Vertical: strength that resists gravity, dignity (things that tower overhead, such as trees and buildings)
- Diagonal: drama, energy, dynamism through contrast with horizontals and verticals (broken balance and a sense of motion)
- Horizontal: calm, quiet (the horizon, the sea, open spaces)

Placement of lines

The placement of lines divides the frame and creates shapes, and the balance of those shapes changes the appeal of the composition.

- Equal division, left-right symmetry: unnatural, artificial.
- Asymmetry: can be attractive depending on the balance; the rule of thirds, the golden ratio, and so on.

Quality of lines

The quality and character of lines strongly evoke emotion.

- Straight lines: tension
- Curves: softness
- Thick lines: strength and sturdiness
- Very thin lines: refinement, delicacy

Harmony and contrast

The moment you draw a line inside the frame, harmony or contrast is created. In other words, the relationships between lines produce rhythm, harmony, discord, balance, imbalance, unity, and so on. For example, a horizontal line along the bottom edge creates harmony, but tilting it immediately creates contrast. However, too much of either harmony or contrast leads to boredom or clutter, so care must be taken with the balance.

Rhythm

Repeating lines creates rhythm and adds a new dimension to the composition.

- Regular lines at even intervals: orderliness (and boredom)
- Random repetition: energy, tension

Summary

By using design elements that are properly associated, the mechanism works well and the visual resonates with the viewer. Even an element as simple as a line can produce emotion, tension, boredom, harmony, or contrast. What the author repeats again and again is: "Don't get caught up in the details; simplify." By doing so repeatedly, your understanding of composition deepens, and you should become able to apply it in your own way.

This has been an introduction to part of the book's opening chapters. I hope that even the portion introduced here broadens your perspective on analyzing visuals. If I have the opportunity, I would like to introduce the other chapters as well.
Introduction

Hello! I'm Viacheslav Vorona, an iOS engineer. Attending this year's try! Swift Tokyo with my teammates gave me a chance to think about where the Swift community as a whole is heading. Some of it is quite new, some has been around for a while but has recently evolved, and in this article I'd like to share my impressions.

The topic we can't pretend not to see...

Let's start with the topic that cannot be avoided. The long-awaited Apple Vision Pro went on sale about two months before try! Swift, so it makes sense that the venue was overflowing with Apple fans. Those who had not yet tried on an Apple Vision Pro were longing for the chance, saying, "Even just for a few minutes, let me put it on!"

The room for Satoshi Hattori's session "Let's build a visionOS app in Swift" was packed. The app itself was simple, just floating a circular timer in the user's virtual space, but when Hattori-san actually put on the headset and started showing the results of his work in real time, the room really lit up.

Also, on the second day of the conference, spatial computing fans held a small, unofficial meetup. Unlike Apple's other devices, the Vision Pro is forming its own sub-community within the Swift community. People who grew up watching futuristic virtual devices in movies are starting to feel that they are getting closer to their cyberpunk dreams. It is exciting, though for some people it may feel threatening.

And of course, I should not forget to mention that the "Swift Punk" performance at the conference opening was also inspired by the Vision Pro.

The opening performance, with $10,000+ worth of props

New frontiers for Swift

Even if they are not the most cutting-edge trends, interesting development has been going on recently in many directions. In other words, the Swift community is trying to expand further beyond the realm of Apple devices.

Server-side Swift, for one, has been around for a while. Vapor was released in 2016 and, while not widely adopted, it is still going strong. Tim Condon of the Vapor Core Team gave a very interesting try! Swift presentation about migrating a large codebase. It was strongly influenced by the migration Vapor is currently undertaking to fully support Swift Concurrency in version 5.0. According to Tim, that version is planned for release in the summer of 2024, so for anyone who wants to try server-side Swift, it may be the perfect time to start.

Tim Condon, the man behind Vapor. Nice shirt!

Alongside an API written in Swift, you can also try implementing web pages in the same Swift language. That was the topic of Paul Hudson's talk. His presentation on generating HTML with Swift result builders was something only someone with his experience could pull off, and it was a lot of fun. The climax of the talk was the announcement of Ignite, a new site builder that uses exactly the same principles he had been talking about.

Paul Hudson: the man behind many things, including Ignite

Another memorable talk in this category came from Saleem Abdulrasool, a devoted fan of cross-platform Swift, who spoke about the differences and similarities between Windows and macOS, and the challenges Swift developers face when trying to build Windows applications.

Finally, I must not forget Yuta Saito's talk on binary size reduction strategies for Swift. At first glance it seems unrelated to the trend I am describing in this article, but I realized it was not when Saito-san showed a simple Swift app deployed to a small game console called Playdate. It was moving.

It is delightful that Swift is not only gaining new capabilities on Apple's platforms but is also constantly exploring new territory.

"The Computer (Paranoia)"

Finally, let's talk about AI, LLMs, and the other topics that have been making headlines everywhere for the past few years, with new "more powerful than anything" models appearing one after another. In today's digital gold rush, software companies are trying to apply AI processing to absolutely everything. Naturally, the Swift community cannot escape its influence, and this trend was visible throughout try! Swift.

One of the first presentations of the conference was by Xingyu Wang, an engineer at Duolingo. He talked about the role-play feature introduced in collaboration with OpenAI: using an AI-powered backend, the challenge of optimizing the time AI generation takes, and the solutions his team applied to mitigate it. I remember it being positive overall, painting a bright picture of the endless possibilities of AI.

On the other hand, the session I had my eye on before the conference was Emad Ghorbaninia's "What Can We Do Without AI in the Future?". I was very curious about what it would cover. Actually attending it made me think deeply about the challenges we will face, as developers and as human beings, as AI develops further. In Emad's view, to hold our own against artificial intelligence, humans should focus on the creative processes where we show our greatest strengths. I can't argue with that.

Closing

Looking back on the discussions at try! Swift Tokyo, it is fascinating to see how the Swift community is evolving and adapting to the latest technological trends. From embracing innovative hardware like the Apple Vision Pro to pioneering new areas such as server-side Swift and AI integration, the progress on display highlights a community that responds broadly and sensitively to where technology is going. This curiosity and passion for innovation make Swift not a language limited to iOS development, but a powerful toolset for expanding what software can do. Going forward, the dynamic interplay between developer creativity and technology should bring even more exciting advances within the Swift community. I am truly looking forward to being part of this vibrant ecosystem!