KINTO Technologies Tech Blog
Introduction

Hello. I am Nakaguchi from the Mobile App Development Group at KINTO Technologies. As the team leader of the iOS team, I have previously published articles on team building, so please take a look if you are interested:

- "Our Retrospectives Had Gone Stale, So We Invited a Professional Facilitator"
- "180-Degree Feedback Is Highly Recommended!"

The other day, I took part in the event "Probably the world's fastest! An ABD reading session for the Goal-Setting Guidebook for Agile Teams (アジャイルチームによる目標づくりガイドブック)". My reasons for attending were mainly the following three:

- I wanted to experience Active Book Dialogue® (hereinafter "ABD").
- I was interested in the book covered by the event, the "Goal-Setting Guidebook for Agile Teams".
- I wanted to meet the author, Ikuo Odanaka.

Among these, ABD, a reading method I was trying for the first time, turned out to be extremely worthwhile. I would like more people to know about it, so this article focuses on ABD.

A note before we start: all people and materials shown in this article are published with the permission of the organizers and the individuals concerned.

About the Event

The event was held on Wednesday, July 10, 2024, as an ABD reading session for the "Goal-Setting Guidebook for Agile Teams" where participants could meet the author before publication. It was so popular that the 15 available slots filled up on the very day the registration page opened, so I feel very fortunate to have been able to attend. Many thanks to Kin-chan from our Corporate IT Group for telling me about the event!

About the Book

As for the content, I would rather you read the book itself, so I will not say much here, but let me share what Ikuo-san presented in his opening talk:

- Goal setting does not seem to be very well liked these days. However, if everyone engaged seriously with goals and learned to achieve them, the world would become a better place. That is why being able to set good goals matters a great deal.
- On the other hand, while setting goals is important, how you go about achieving them is even more important. Only the first 20% or so of this book is about creating goals; the rest introduces ways to achieve them, incorporating elements of agile.
- The book does not cover performance reviews, which are often discussed together with goals, but eight contributors have written columns that nicely fill in that aspect, so please read the columns too!

[Photo: Ikuo-san giving the opening talk]

About Ikuo-san

I had never met Ikuo-san before, but I knew of him through the following talks and articles:

- "Keeper of the Seven Keys: Four Keys plus three more"
- "10 things that make me think this engineering manager is so easy to work with"
- "Five treasured books that supported the 'ideal EM' Odanaka-san in fully living up to the proud title of 'manager'"

His thinking on developer productivity and engineering management, and his approach to reading, have taught me a great deal, and I had long wanted to meet and talk with him in person. On the day, though, beyond a brief greeting I could not find time for a proper conversation. It was a real shame, but I hope for another opportunity.

About ABD

The following is quoted from the official ABD website (an explanation by its developer, Sotaro Takenouchi):

ABD is a completely new reading method that lets people who struggle with reading, as well as people who love books, read the book they want in a short time. Through the process of dividing up a book, summarizing your part, presenting and sharing it, and holding a dialogue to deepen insights, you can deeply understand what the author is trying to convey and gain active awareness and learning. Group reading and dialogue multiply each person's active reading experience, deepening the learning further and opening up the possibility of new relationships. Through ABD, a form of reading grounded in each person's intrinsic motivation, we sincerely hope people will take better steps forward.

The flow:

- Co-summarize: Bring copies of the book, or cut a single copy apart, assign the parts, and each person reads their part and writes a summary.
- Relay presentation: Each person presents their summary in relay fashion.
- Dialogue: Pose questions, discuss impressions and doubts, and deepen them together.

The appeal of ABD:

1. Reading in a short time: You can read a book in a short time and still deeply understand the content and the author's intent, which is perfect for anyone with a pile of unread books.
2. A summary remains: After an Active Book Dialogue®, a summary remains, so you can review it later and easily convey the key points to people who have not read the book.
3. High retention: You take in the material with a presentation in mind, summarize it, and then immediately output it and exchange opinions, so it sticks deeply in your memory.
4. Deep insights and emergence: Diverse people bringing their own questions and impressions into dialogue gives rise to deep, emergent learning.
5. Multifaceted personal growth: You simultaneously hone concentration, summarization, presentation, communication, and dialogue skills needed for leadership today.
6. A common language emerges: Doing it with the same members means sharing knowledge at the same level, so a common language forms.
7. Community building: With just one book you can create dialogue and a gathering place, making it ideal for casual community building.
8. Above all, it's fun! You can immediately share the excitement and learning from the book on the spot, producing rich learning and, above all, making reading fun.

Personally, I felt that "1. Reading in a short time", "6. A common language emerges", "7. Community building", and "8. Above all, it's fun!" were the most valuable.

How the Day Went

The book was cut apart into 15 parts. I had never seen such a sight before! lol

[Photo: the cut-up book]

Co-summarize (20 minutes): Each person reads their assigned part and writes a summary. Reading the part and condensing it onto three A4 sheets in 20 minutes was quite difficult... I was so pressed for time that I forgot to take photos.

Relay presentation (1 minute 30 seconds per person × 15 people): Each person posts their summary on the wall.

[Photo: everyone's summaries]

Then each summary is presented in 1 minute 30 seconds. Everyone's summaries and presentations were excellent. The photo shows my own presentation; between the extremely short time and my nerves, I have no memory at all of what I said...

[Photo: my presentation]

Dialogue (25 minutes): We picked three parts from the presentations and split into groups to dig deeper into them. I joined the group on "Let's become a team that helps each other".

[Photo: a group digging deeper]

The group included people working as scrum masters and engineering managers, and we exchanged a wide range of opinions. What stuck with me most was the idea that the things you "like" should be developed regardless of whether they are your forte or a weakness (a growth opportunity), so we want to build teams where people can take on what they love.

What I Learned from the Book Through ABD

I had never used OKR (Objectives and Key Results) for goal management, but my understanding of it deepened. I also learned how important it is, when creating goals, for the team to set them out of intrinsic motivation. For that, rather than setting goals top-down, creating them through discussion within the team is the key, which left a strong impression on me.

Another point that stayed with me is that what matters is "achieving the goal", not "burning down tasks". The idea that "sometimes you need the courage to drop low-priority tasks" was a way of thinking I had not had before.

And when it comes to "having no time" to achieve a goal, the book breaks that down into:

- genuinely having no time,
- not knowing whether it is acceptable to spend time on it, and
- lacking the motivation.

"Genuinely having no time" is easy to picture, but the other two were new to me, and yet they rang true from experience. The book also describes remedies for these, so I want to reread it and review them.

Impressions

This was my first ABD experience, and it was stimulating and great fun. The participants were all genuinely interested in the book, so the presentations and dialogue were constructive and I learned a lot. I would like to try ABD at our company, gathering interested members. On the other hand, I suspect running it is quite difficult, for reasons such as:

- Facilitation skill is required to keep everything within the limited time.
- Co-summarizing is hard, and the level of summaries and presentations may vary between participants.
- Choosing the book and gathering members seems difficult.

I have joined several traditional group reading sessions (rinkokai) before, and felt they carry real hurdles, such as the burden of sustaining a long-running event and, depending on the format, the individual workload. ABD, by contrast, finishes in one short burst, so it is a very good reading method that resolves those drawbacks. However, there is a trade-off: the short time can lower your comprehension of the book. I felt that careful selection of the book and advance discussion with the participants are needed to decide which reading method fits best.
Sharing Our Great Group Reading Session on "Learning from GitLab: How to Create the World's Most Advanced Remote Organization"

Hello, I am Awache ( @_awache ). We were so fascinated by the book "Learning from GitLab: How to Create the World's Most Advanced Remote Organization - How to use documents to achieve maximum results without an office" ( https://www.amazon.co.jp/dp/4798179426 ) that we decided to hold a group reading session with people from both inside and outside the company. In this article, I'd like to share our efforts with you.

But first, let me announce our next get-together: we will be hosting the 'finale' of the group reading session for "Learning from GitLab: How to Create the World's Most Advanced Remote Organization". A bit sudden maybe, but it's important. You can see the details below:

Connpass: Grand Finale of the group reading session for "Learning from GitLab: How to Create the World's Most Advanced Remote Organization"
Date and time: 18:00 - 21:00 (doors open at 17:40), Thursday, April 25, 2024
Event type: Offline
Venue: Muromachi Office, KINTO Technologies Corporation

This event is intended for those who have read the book and participated in the previous group reading sessions of "Learning from GitLab: How to Create the World's Most Advanced Remote Organization - How to use documents to achieve maximum results without an office", but it is also open to those who are currently reading it or plan to do so in the future. We will discuss how the group reading session was conducted in each of the companies and how it was received, and gather insights from the book, aiming to create an open forum for all participants to engage in the discussions. There are still spots available, so if you are interested, please join us! We'd like to find ways for everyone to enjoy the session casually. (Though the irony is not lost on me that we will be meeting in person to discuss remote organizations lol)

Common Challenges at Group Reading Sessions

Ensuring continuous participation
Regular gatherings are necessary to finish a book in a group setting. Dividing the book into reasonable portions and meeting once a week takes about 2 to 3 months.

Dropping out in the middle
It is always challenging to keep anything going for a long period, so it is natural that the number of participants dwindles one by one as the meetings progress. If we fail to keep the participants motivated, we may end up with a very lonely group reading session.

Difficulty of joining in the middle
Given the nature of book readings, the hurdle for joining typically rises in the middle stages. As a result, the number of participants is likely to decrease, while opportunities to gain new participants are very rare.

Sustaining leadership
The leader carries various burdens: preparation, securing time for participants, and facilitation. These are not one-off tasks; they continue until the end of the book, and it takes a lot of motivation to keep doing all of this alone.

Differences in reading speed and understanding among participants
Reading speed and comprehension level vary from participant to participant. Without recognizing this, a group reading session ends up with a lot of tedious time and little discussion.

Taking all of the above into account, finishing a book in a group setting is quite challenging, isn't it? I myself have many times dropped out in the middle of a book or failed to read it to the end.
However, this time, I really wanted to share the ideas of this book widely within the company, and I was strongly motivated to see it through to the end. So I tried to figure out how to solve these issues. For example, I hypothesized that we could tackle them by creating an opportunity to run group reading sessions on the same book in different ways at multiple locations beyond company boundaries, involving others rather than doing it alone, and sharing our findings at the end. However, there are limits to what I can do by myself. I consulted @soudai1025 -san, who is a technical advisor in our DBRE team, and with his cooperation we decided to hold "A Sodai-Naru (Great) Group Reading Session."

Organizing the Reading Session

We decided to divide our event into the following three stages:

1. Several companies hold a joint kickoff meeting as the trigger.
2. Each company runs its own series of group reading sessions internally over about three months.
3. At the end, findings are shared together.

You can check out our kickoff session on YouTube: https://www.youtube.com/watch?v=IBgmGtpW15Q

How to Start a Group Reading Session

I will briefly describe what I prepared for the group reading session after the kickoff was over.

Gathering team members
First of all, we recruited willing participants through open internal channels. We reached out to people interested in group reading sessions and waited for volunteers to raise their hands. As a result, 14 people were interested!

Transcription (transcribing the book)
I was determined to transcribe the book from the moment I decided to lead the group reading session. I think transcribing, which lets you read, write, and review simultaneously, is an excellent way to quickly absorb the content of a book. However, this book is over 300 pages, so it takes determination! lol

Purchasing books in bulk
As mentioned on our careers site, KINTO Technologies lets you purchase the books you need. Since several of the 14 people did not have the book yet, we used this program to buy copies in bulk for them.

Thinking about how to run our in-house group reading session
I seriously considered how to ensure that everyone could enjoy the sessions without feeling pressure whenever they attended. I will introduce the specific actions later. While I was pondering this, time flew by, and our in-house group reading session was set for February.

The In-house Group Reading Session

Kickoff Working Agreement
I shared with the participants a summary of the kind of atmosphere I wanted to create. Here are the details. The session is designed to minimize pressure on the participants:

- We follow up with each other even if someone has not read the book.
- The first 10 minutes are quiet reading time.
- The main focus is on discussion, and the output is made public to create an atmosphere in which even those who are not actually participating can join in along the way:
  - Every output is summarized and made available to everyone.
  - Sessions are recorded via Zoom and published whenever possible, so the same content can be revisited multiple times.
- Do not interfere when other participants are speaking.
  - Be respectful and accepting of what participants say.
- Stimulate free discussion.
  - Discussions are held in breakout rooms of up to 4 people; small groups lower the psychological barrier to speaking up and let each person bring up what they want to discuss.
- Participation from ROMs (Read Only Members) is never refused.
  - When participating, share your situation with the others to create an accepting atmosphere, for example "I won't be able to talk today" or "Because of where I am working from today, I may not be able to talk much."
- Everyone actively creates output.
  - Minutes of discussions are actively logged by whoever is available (e.g., those who cannot speak that day).

How we proceeded with the group reading session

I still believe a fixed structure is desirable for ongoing discussions. Even if you are a little late but want to join the group reading session, it may be psychologically difficult to do so if the session is in the middle of a heated discussion. On the other hand, if you know roughly what happens when, you can, for example, slip in during quiet reading time. That is why I decided to create a clear structure for us.

Basic format
- Quiet reading time (10 minutes)
- Discussion time (30 minutes)
- Content sharing (20 minutes)

Topics for the discussions
- What I could relate to
- What I could not relate to
- What I want to put into practice at KINTO Technologies
  - The results of what was put into practice could perhaps be shared in the next session

Discussion output
- The agenda during the discussion is captured in Google Slides
- After the discussion, everyone shares the topics that came up

Selection of tools

Gather
We chose Gather as the web meeting tool. Since our main focus was discussion, we wanted a setup where people could comfortably talk with whoever was attending. With Zoom, you have to create breakout rooms every time, and they are hard to manage. Gather, a virtual office space, perfectly suited our need to gather everyone together and later move to small rooms for discussion. However, it is not well suited to sharing recordings, so we gave up on that; instead, we made sure to keep logs so we could review them later.

Microsoft Loop
We chose Loop as our collaboration tool. KINTO Technologies has mostly been using Confluence, but it has some weaknesses in collaborative editing, with several participants writing notes freely. We settled on Loop because the experience is not so different from Confluence, yet less stressful.

Setting up additional meetings
Our group reading session was set for every Tuesday from 18:00 to 19:00. That is a bit late, and it might clash with prime time for those with sudden work commitments or with children. And if you miss even one session, the psychological hurdle to rejoining becomes higher. So I decided to hold a session with exactly the same content on the following Wednesday, from 12:00 to 13:00. This reduced the risk of missing out, and gave participants who had attended the day before time to understand the content in more depth.
Moreover, hearing other participants' perspectives gave them new insights, making each session more enjoyable.

Leveraging Generative AI

As I mentioned in the Working Agreement above, I had a strong desire to create a place where people could still keep up even without having read the book. Although the first 10 minutes are quiet reading time, it is rather challenging to read the required amount within 10 minutes. Our strong allies here were the transcription and ChatGPT. By having ChatGPT summarize the transcribed text to whatever length was needed, even 10 minutes of quiet reading time could make a big difference in the quality of participants' input. For example, here is a summary of the first part. Don't you think silent reading time becomes effective when about 12 pages can be condensed into this amount?

![AI Brief Summary](/assets/blog/authors/_awache/20240422/AIざっくり要約.png =750x)

The original text is also available in Confluence, so if something in the summary catches your interest, you can search for keywords and quickly find the passage. Personally, this was such an important factor that I believe it was the main reason we were able to make it to the end.
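As an illustration only (the article does not show how the summaries were produced), here is a minimal Swift sketch of condensing a transcribed chapter with OpenAI's chat completions REST endpoint. The endpoint and JSON shape follow the public API; the model name and prompt wording are assumptions:

import Foundation

// Hypothetical helper: condense transcribed book text via the OpenAI
// chat completions API so it can be read within the 10-minute slot.
func summarize(transcript: String, apiKey: String) async throws -> String {
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = [
        "model": "gpt-4o-mini",  // assumed model name
        "messages": [
            ["role": "system",
             "content": "Summarize the following book excerpt so it can be read in about 10 minutes."],
            ["role": "user", "content": transcript]
        ]
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)
    let (data, _) = try await URLSession.shared.data(for: request)
    // Pull choices[0].message.content out of the response JSON
    let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
    let choices = json?["choices"] as? [[String: Any]]
    let message = choices?.first?["message"] as? [String: Any]
    return message?["content"] as? String ?? ""
}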
In-house Group Reading Session

![Group Reading Session](/assets/blog/authors/_awache/20240422/gather.png =750x)

As a result, a total of 17 group reading sessions were held. I was able to make it through all 17 without being left alone until the end lol. Some people participated fully, while others came whenever they could. Although the number of participants per session varied, partly because some sessions were repeated, I found attendance per chapter to be quite good.

Part 1: Understanding the benefits of remote organizations / Part 2: Processes to parallel the world's most advanced remote organization
- February 13, 2024 (6 participants)
- February 14, 2024 (9 participants)
Chapter 5: Culture is fostered by value
- February 21, 2024 (8 participants)
- February 27, 2024 (4 participants)
- February 28, 2024 (4 participants)
Chapter 6: Rules of communication
- March 5, 2024 (7 participants)
- March 6, 2024 (5 participants)
Chapter 7: The importance of onboarding in remote organizations / Chapter 8: Fostering psychological safety
- March 13, 2024 (7 participants)
- March 19, 2024 (5 participants)
Chapter 9: Bringing out individual performance / Chapter 10: Human resource system based on GitLab Value
- March 26, 2024 (7 participants)
- March 27, 2024 (5 participants)
Chapter 11: Managerial roles and mechanisms to support management / Chapter 12: Achieving conditioning
- April 2, 2024 (6 participants)
- April 3, 2024 (7 participants)
Chapter 13: Using L&D to improve performance and engagement / Conclusion
- April 9, 2024 (7 participants)
- April 10, 2024 (5 participants)
Wrap up!
- April 16, 2024 (5 participants)
- April 17, 2024 (4 participants)

To learn more about what was discussed, please join us for the finale of the group reading session for "Learning from GitLab: How to Create the World's Most Advanced Remote Organization"! Ultimately, this book contains extensive information on what we should be aiming for, and there were intense discussions about how it may collide with reality, which is a little difficult to write about here. So please let me talk about this topic another day.

What we gained and produced through this group reading session

Connections
Through this group reading session I was able to learn about the thoughts of those who participated, and I will continue to cherish these connections as I work to make KINTO Technologies an even more exciting place to work. We have a channel called #thanks where we can openly express our gratitude to each other, and I was very happy to receive warm messages from the participants on the final day of the session.

![thanks](/assets/blog/authors/_awache/20240422/thanks.png =750x)

Transcription
I feel that transcribing is an important process if I want to keep leading group reading sessions in the future, as it allowed me to produce the AI summaries and respond to the various topics that came up during the discussions.

AI Summary
Summaries produced with generative AI are really powerful. Over time you may forget where something was written, but with a summary, a quick 10-minute look can refresh your memory.

Mandala Chart
In my own way, I summarized the key points of this book in a mandala chart template. Of course, it is impossible to do everything, so I would like to pick points and themes and expand what I can do little by little.

Conclusion
How was our group reading session for "Learning from GitLab: How to Create the World's Most Advanced Remote Organization - How to use documents to achieve maximum results without an office"? I was personally more satisfied with this session than with any other I have held before, which is why I felt compelled to share it on our Tech Blog. In truth, there is much more I would like to write, but it would be too long, so I will stop here for now.

Reminder: we will be hosting the 'finale' of the group reading session for "Learning from GitLab: How to Create the World's Most Advanced Remote Organization". There are still spots available. We will do our best to make it an enjoyable session as well, so if you are willing to come, please apply! Thank you very much.

Connpass: Grand Finale of the group reading session for "Learning from GitLab: How to Create the World's Most Advanced Remote Organization"
Date and time: 18:00 - 21:00 (doors open at 17:40), Thursday, April 25, 2024
Event type: Offline
Venue: Muromachi Office, KINTO Technologies Corporation

This event is intended for those who have already read the book and participated in the previous group reading sessions of "Learning from GitLab: How to Create the World's Most Advanced Remote Organization - How to use documents to achieve maximum results without an office", but it is also open to those who are currently reading it or plan to do so in the future. We look forward to seeing you at the event! See you!
Hi! I'm Ryomm, developing the iOS app my route at KINTO Technologies. My fellow developers Hosaka-san and Chang-san, another business partner, and I successfully implemented and integrated snapshot testing.

Introduction

The my route app team is currently moving toward SwiftUI, so we decided to implement snapshot testing as a foundational step. We began this transition by replacing only the content while keeping UIViewController as the base, which ensures that the snapshot tests we implement now remain directly applicable. Let me introduce the techniques and the trial and error involved in applying snapshot testing to an app built with UIKit.

What is Snapshot Testing?

It is a type of testing that verifies whether there are any differences between screenshots taken before and after code modifications. We use the Point-Free library: https://github.com/pointfreeco/swift-snapshot-testing

While developing my route, we extend XCTestCase to create a method that wraps assertSnapshots as follows. We settled on a threshold of 98.5% after various trials, so that negligibly fine rendering variances are tolerated.

extension XCTestCase {
    var precision: Float { 0.985 }

    func testSnapshot(vc: UIViewController, record: Bool = false, file: StaticString, function: String, line: UInt) {
        assert(UIDevice.current.name == "iPhone 15", "Please run the test by iPhone 15")
        // SnapshotConfig is an enum that specifies the list of devices to be tested
        SnapshotConfig.allCases.forEach {
            assertSnapshots(matching: vc,
                            as: [.image(on: $0.viewImageConfig, precision: precision)],
                            record: record,
                            file: file,
                            testName: function + $0.rawValue,
                            line: line)
        }
    }
}

The snapshot test for each screen is then written as follows:

final class SampleVCTests: XCTestCase {
    // Whether the snapshot test runs in recording mode or not
    var record = false

    func testViewController() throws {
        let sampleVC = SampleVC(coder: coder)
        let navi = UINavigationController(rootViewController: sampleVC)
        navi.modalPresentationStyle = .fullScreen
        // This is where the lifecycle methods are called
        // (assumes a project-side helper that swaps in the key window's root view controller)
        UIApplication.shared.rootViewController = navi
        // The lifecycle methods starting from viewDidLoad are invoked for each test device
        testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line)
    }
}
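The SnapshotConfig enum is referenced above but not shown in the article. As a minimal sketch only (the real device list is project-specific, and the preset names assume a library version that provides them), it might map each case to one of the library's ViewImageConfig presets:

import SnapshotTesting

// Hypothetical SnapshotConfig: rawValue is appended to the test name,
// viewImageConfig selects the simulated device for assertSnapshots.
enum SnapshotConfig: String, CaseIterable {
    case iPhoneSe = "_iPhoneSe"
    case iPhone13ProMax = "_iPhone13ProMax"

    var viewImageConfig: ViewImageConfig {
        switch self {
        case .iPhoneSe: return .iPhoneSe
        case .iPhone13ProMax: return .iPhone13ProMax
        }
    }
}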
Tips

We need to wait for the data fetched from the API to be reflected in the view during viewWillAppear and the methods that follow. We encountered issues where the snapshot tests executed too early, before the API data was reflected in the view, causing problems like the loading indicator still being visible. Since it is difficult to determine whether the data from the API call has been reflected in the view, we implemented a delegate to handle this notification.

protocol BaseViewControllerDelegate: AnyObject {
    func viewDidDraw()
}

In the view controller class, create a property for the delegate prepared above. If no delegate is specified during initialization, the property defaults to nil.

class SampleVC: BaseViewController {
    // ...
    weak var baseDelegate: BaseViewControllerDelegate?
    // ...
    init(baseDelegate: BaseViewControllerDelegate? = nil) {
        self.baseDelegate = baseDelegate
        super.init(nibName: nil, bundle: nil)
    }
    // ...
}

When calling the API and updating the view, for example after receiving the results with Combine and reflecting them on the screen, call baseDelegate?.viewDidDraw(). This notifies the snapshot test that the view has been successfully updated with the data.

someAPIResult.receive(on: DispatchQueue.main)
    .sink(receiveValue: { [weak self] result in
        guard let self else { return }
        switch result {
        case .success(let item):
            self.hideIndicator()
            self.updateView(with: item)
            // Timing of data reflection completion
            self.baseDelegate?.viewDidDraw()
        case .failure(let error):
            self.hideIndicator()
            self.showError(error: error)
        }
    })
    .store(in: &cancellables)

Since we want to wait for viewDidDraw() to be executed, we add an XCTestExpectation to the snapshot test. (Note that the test case conforms to BaseViewControllerDelegate.)

final class SampleVCTests: XCTestCase, BaseViewControllerDelegate {
    var record = false
    var expectation: XCTestExpectation!

    func testViewController() throws {
        let sampleVC = SampleVC(coder: coder, baseDelegate: self)
        let navi = UINavigationController(rootViewController: sampleVC)
        navi.modalPresentationStyle = .fullScreen
        UIApplication.shared.rootViewController = navi
        expectation = expectation(description: "callSomeAPI finished")
        wait(for: [expectation], timeout: 5.0)
        sampleVC.baseDelegate = nil
        testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line)
    }

    func viewDidDraw() {
        expectation.fulfill()
    }
}

When there are multiple sets of API data to be reflected (i.e., baseDelegate?.viewDidDraw() is called in multiple places), you can specify expectedFulfillmentCount or assertForOverFulfill.

final class SampleVCTests: XCTestCase, BaseViewControllerDelegate {
    var record = false
    var expectation: XCTestExpectation!

    func testViewController() throws {
        let sampleVC = SampleVC(coder: coder, baseDelegate: self)
        let navi = UINavigationController(rootViewController: sampleVC)
        navi.modalPresentationStyle = .fullScreen
        UIApplication.shared.rootViewController = navi
        expectation = expectation(description: "callSomeAPI finished")
        // When viewDidDraw() is called twice
        expectation.expectedFulfillmentCount = 2
        // When viewDidDraw() is called more times than specified, additional calls are ignored
        expectation.assertForOverFulfill = false
        wait(for: [expectation], timeout: 5.0)
        sampleVC.baseDelegate = nil
        testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line)
    }

    func viewDidDraw() {
        expectation.fulfill()
    }
}

If the baseDelegate from the previous screen remains set, running the snapshot tests across all screens will call viewDidLoad and the subsequent lifecycle methods for each test device every time testSnapshot() is invoked. This causes the API to be called multiple times and viewDidDraw() to be executed repeatedly, resulting in an over-fulfillment error. Therefore, we clear the baseDelegate after calling wait().

Frame misalignment on devices

While snapshot testing can generate snapshots for multiple devices, we encountered issues where the layout and size of elements were misaligned on some devices.

Misaligned

This issue is caused by the lifecycle of the snapshot test execution: loading starts on one device, and the other devices are then rendered by changing the size without reloading. This means that viewDidLoad() is executed only once at the beginning, and rendering for the other devices starts from viewWillAppear(). As a solution, create a MockViewController that wraps the view controller you want to test, and override viewWillAppear() to call the methods that are originally called in viewDidLoad().
import XCTest
@testable import App

final class SampleVCTests: XCTestCase {
    // Whether the snapshot test runs in recording mode or not
    var record = false

    func testViewController() throws {
        // Write it the same way as when calling the screen
        let storyboard = UIStoryboard(name: "Sample", bundle: nil)
        let sampleVC = storyboard.instantiateViewController(identifier: "Sample") { coder in
            // VC wrapped for the snapshot test
            MockSampleVC(coder: coder, completeHandler: nil)
        }
        let navi = UINavigationController(rootViewController: sampleVC)
        navi.modalPresentationStyle = .fullScreen
        UIApplication.shared.rootViewController = navi
        testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line)
    }
}

class MockSampleVC: SampleVC {
    required init?(coder: NSCoder) {
        fatalError("init(coder: \(coder)) has not been implemented")
    }

    override init?(coder: NSCoder, completeHandler: ((_ readString: String?) -> Void)? = nil) {
        super.init(coder: coder, completeHandler: completeHandler)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // The following methods are originally called in viewDidLoad()
        super.setNavigationBar()
        super.setCameraPreviewMask()
        super.cameraPreview()
        super.stopCamera()
    }
}

Still not fixed...

If the rendering is still misaligned, calling layoutIfNeeded() to update the frames often resolves the issue.

import XCTest
@testable import App

final class SampleVCTests: XCTestCase {
    var record = false

    func testViewController() throws {
        let storyboard = UIStoryboard(name: "Sample", bundle: nil)
        let sampleVC = storyboard.instantiateViewController(identifier: "Sample") { coder in
            MockSampleVC(coder: coder, completeHandler: nil)
        }
        let navi = UINavigationController(rootViewController: sampleVC)
        navi.modalPresentationStyle = .fullScreen
        UIApplication.shared.rootViewController = navi
        testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line)
    }
}

fileprivate class MockSampleVC: SampleVC {
    required init?(coder: NSCoder) {
        fatalError("init(coder: \(coder)) has not been implemented")
    }

    override init?(coder: NSCoder, completeHandler: ((_ readString: String?) -> Void)? = nil) {
        super.init(coder: coder, completeHandler: completeHandler)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Update the frames before calling the rendering methods
        self.videoView.layoutIfNeeded()
        self.targetView.layoutIfNeeded()
        super.setNavigationBar()
        super.setCameraPreviewMask()
        super.cameraPreview()
        super.stopCamera()
    }
}

Looks good

Snapshot for WebView screens

There are situations where you may want to apply snapshot testing to toolbars and other elements, but not to the content displayed in a WebView. In such cases, it is good to separate the part that loads the WebView content from the WebView's configuration, and mock the loading part during tests. For the implementation, we separate the method that calls self.webView.load(urlRequest) etc. to display the WebView content from the method that configures the WebView itself.

// Implementation in the VC
class SampleWebviewVC: BaseViewController {
    // ...
    override func viewDidLoad() {
        super.viewDidLoad()
        self.setNavigationBar()
        self.setWebView()
        self.setToolBar()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        self.setWebViewContent()
    }
    // ...
    /**
     * Separate the method for configuring the WebView from the method for setting its content
     */

    /// Configure the WebView
    func setWebView() {
        self.webView.uiDelegate = self
        self.webView.navigationDelegate = self
        // Monitor the loading state of the web page
        webViewObservers.append(self.webView.observe(\.estimatedProgress, options: .new) { [weak self] _, change in
            guard let self = self else { return }
            if let newValue = change.newValue {
                self.loadingProgress.setProgress(Float(newValue), animated: true)
            }
        })
    }

    /// Set content for the WebView
    private func setWebViewContent() {
        let request = URLRequest(url: self.url,
                                 cachePolicy: .reloadIgnoringLocalCacheData,
                                 timeoutInterval: 60)
        self.webView.load(request)
    }
    // ...
}

Then, in the mock that wraps the VC under test, we make sure the method that loads the WebView content is not called.

import XCTest
@testable import App

final class SampleWebviewVCTests: XCTestCase {
    private let record = false

    func testViewController() throws {
        let storyboard = UIStoryboard(name: "SampleWebview", bundle: .main)
        let sampleWebviewVC = storyboard.instantiateViewController(identifier: "SampleWebview") { coder in
            MockSampleWebviewVC(coder: coder, url: URL(string: "https://top.myroute.fun/")!, linkType: .Foobar)
        }
        let navi = UINavigationController(rootViewController: sampleWebviewVC)
        navi.modalPresentationStyle = .fullScreen
        UIApplication.shared.rootViewController = navi
        testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line)
    }
}

fileprivate class MockSampleWebviewVC: SampleWebviewVC {
    override init?(coder: NSCoder, url: URL, linkType: LinkNamesItem?) {
        super.init(coder: coder, url: url, linkType: linkType)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func viewWillAppear(_ animated: Bool) {
        // Change the methods that were called in viewDidLoad to be called in viewWillAppear
        self.setNavigationBar()
        self.setWebView()
        self.setToolBar()
        super.viewWillAppear(animated)
    }

    override func viewDidAppear(_ animated: Bool) {
        // Do nothing
        // Override to avoid calling the method that sets the WebView content
    }
}

Snapshot of a screen that uses the camera

We also take snapshots of a screen that calls the camera and displays a customized overlay view. However, since the camera does not work on the simulator, we needed a way to disable the camera part while still testing the overlay. There was a suggestion to insert a dummy image to make the camera work on the simulator, but that seemed too costly to implement just for the snapshot test of a non-primary screen. In my route's snapshot tests, we used mocks to override the parts that handle the camera input and the parts that set up the capture displayed in AVCaptureVideoPreviewLayer, so they are not called. This way, the AVCaptureVideoPreviewLayer renders as a blank screen without any input, allowing the customized view to be shown on top. In the actual implementation, it is written as follows:

class UseCameraVC: BaseViewController {
    // ...
    override func viewDidLoad() {
        super.viewDidLoad()
        self.videoView.layoutIfNeeded()
        setNavigationBar()
        setCameraPreviewMask()
        do {
            guard let videoDevice = AVCaptureDevice.default(for: AVMediaType.video) else { return }
            let videoInput = try AVCaptureDeviceInput(device: videoDevice) as AVCaptureDeviceInput
            if captureSession.canAddInput(videoInput) {
                captureSession.addInput(videoInput)
                let videoOutput = AVCaptureVideoDataOutput()
                if captureSession.canAddOutput(videoOutput) {
                    captureSession.addOutput(videoOutput)
                    videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main)
                }
            }
        } catch {
            return
        }
        cameraPreview()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Since the camera cannot be used in the simulator, disable it
        #if targetEnvironment(simulator)
        stopCamera()
        dismiss(animated: true)
        #else
        captureSession.startRunning()
        #endif
    }
}

Override these with mocks as follows. For the reasons described in the frame misalignment section, we call the methods from viewWillAppear() that were originally called in viewDidLoad().

class MockUseCameraVC: UseCameraVC {
    // ...
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        self.videoView.layoutIfNeeded()
        super.setNavigationBar()
        super.setCameraPreviewMask()
        super.cameraPreview()
        super.stopCamera()
    }
}

The cameraPreview() method uses AVCaptureVideoPreviewLayer to display the camera image from the captureSession, but since we override it to have no input, it renders as a white view.

CI Strategy

When we first introduced snapshot testing, we uploaded reference images to a single S3 bucket. During reviews, we downloaded the reference images each time and ran the tests. However, when a view was modified and the reference images were updated at the same time, tests for other PRs would fail until the PR with the updated reference images was merged. To address this, we created two directories within the bucket hosting the reference images: one hosts the images during PR review, and once a PR is merged, the images are copied to the other directory. By doing so, we ensure that updates to the reference images do not interfere with the tests of other PRs.

Useful shell scripts

my route provides four shell scripts for snapshots. The first downloads all the current reference images, which lets the tests pass locally.

# Used when switching from the develop branch
# Example: sh setup_snapshot.sh

# Clean up the old files in the reference images directory
rm -r AppTests/Snapshot/__Snapshots__/
# Download reference images from S3
aws s3 cp "$awspath/__Snapshots__" AppTests/Snapshot/__Snapshots__ --recursive --profile user

The second script uploads modified reference images to the PR-review area of the S3 bucket when creating a pull request.

# When creating a PR, upload the modified tests passed as arguments
# Example: sh upload_snapshot.sh ×××Tests

path="./SpotTests/Snapshot/__Snapshots__"
awspath="s3://strl-mrt-web-s3b-mat-001-jjkn32-e/mobile-app-test/ios/feature/__Snapshots__"

if [ $# = 0 ]; then
    echo "No arguments provided"
else
    for testName in "${@}"; do
        if [[ $testName == *"Tests"* ]]; then
            echo "$path/$testName"
            aws s3 cp "$path/$testName" "$awspath/$testName" --exclude ".DS_Store" --recursive --profile user
        else
            echo "($testName) No tests found"
        fi
    done
fi

The third script individually downloads the reference images for the modified screens. It is used when reviewing a pull request that includes screen changes.
# When reviewing tests, download the reference images for the specific tests
# Example: sh download_snapshot.sh ×××Tests

if [ $# = 0 ]; then
    echo "No arguments provided"
else
    rm -r AppTests/Snapshot/__Snapshots__/
    for testName in "${@}"; do
        if [[ $testName == *"Tests"* ]]; then
            echo "$localpath/$testName"
            aws s3 cp "$awspath/$testName" "$localpath/$testName" --recursive --profile user
        else
            echo "($testName) No tests found"
        fi
    done
fi

The fourth script forcibly updates the reference images. It is basically unnecessary, because the reference images for screens with modified test files are copied automatically, but it is useful when reference images change without the test files being modified, such as when common components are updated.

# If changes affect reference images other than the modified test files
# (for example, when common components are updated), upload manually.
# Use it after merging.
# Example: sh force_upload_snapshot.sh ×××Tests

if [ $# = 0 ]; then
    echo "No arguments provided"
else
    echo "Do you want to forcibly upload to the AWS S3 develop folder? 【yes/no】"
    read question
    if [ $question = "yes" ]; then
        for testName in "${@}"; do
            if [[ $testName == *"Tests"* ]]; then
                echo "$localpath/$testName"
                aws s3 cp "$localpath/$testName" "$awsFeaturePath/$testName" --exclude ".DS_Store" --recursive --profile user
                aws s3 cp "$localpath/$testName" "$awsDevelopPath/$testName" --exclude ".DS_Store" --recursive --profile user
            else
                echo "($testName) No tests found"
            fi
        done
    else
        echo "Termination"
    fi
fi

Since having four scripts can be confusing as to when and by whom each should be used, we defined them in the Taskfile and made the explanations easy to find. When executing via task, we have to use -- when passing arguments such as file names, which makes the command a bit longer, so in practice we often call the scripts directly. Still, having this setup is valuable just for the sake of clear explanations.

% task
task: [default] task -l --sort none
task: Available tasks for this project:
* default: show commands
* setup_snapshot: [For Assignee] [After branch switch] Used when making changes to snapshot tests after switching from the develop branch. (Example) task setup_snapshot or sh setup_snapshot.sh
* upload_snapshot: [For Assignee] [During PR creation] Upload the snapshot images to the S3 bucket for PR review by passing the modified tests as arguments. (Example) task upload_snapshot -- ×××Tests or sh upload_snapshot.sh ×××Tests
* download_snapshot: [For Reviewer] [During review] Download the reference images by passing the relevant tests as arguments. (Example) task download_snapshot -- ×××Tests or sh download_snapshot.sh ×××Tests
* force_upload_snapshot: [For Assignee] [After merging] If changes affect reference images other than the modified test files (for example, when common components are updated), manually upload the changes by passing the modified tests as arguments. (Example) task force_upload_snapshot -- ×××Tests or sh force_upload_snapshot.sh ×××Tests

Additionally, this is something I set up personally, but I find it convenient to have an alias that changes the profile name hardcoded in the scripts to the profile configured in your own environment (for those who prefer their own profile names). In this case, the profile hardcoded as user is changed to myroute-user.
alias sett="gsed -i 's/user/myroute-user/' setup_snapshot.sh && gsed -i 's/user/myroute-user/' upload_snapshot.sh && gsed -i 's/user/myroute-user/' download_snapshot.sh && gsed -i 's/user/myroute-user/' force_upload_snapshot.sh"

Bitrise

In my route, we use Bitrise for CI. When a PR that includes changes to snapshot tests is merged, Bitrise automatically detects the changes and copies the reference images from the feature folder to the develop folder. This ensures the snapshot tests always run against the correct reference images.

Detecting subtle differences in reference images

Sometimes differences are too subtle to see with the naked eye, but the snapshot tests will still detect them and report errors. Can't see anything (3_3)? In such cases, overlaying the images with ImageMagick can help you spot the differences more easily. Run the following command:

convert Snapshot/reference.png -color-matrix "6x3: 1 0 0 0 0 0.4 0 1 0 0 0 0 0 0 1 0 0 0" ~/changeColor.png \
&& magick Snapshot/failure.png ~/changeColor.png -compose dissolve -define compose:args='60,100' -composite ~/Desktop/blend.png \
&& rm ~/changeColor.png

You can then see the overlaid images. Changing the hue of the reference image to a reddish tint before overlaying makes the differences easier to spot. For added convenience, I recommend adding this command as a function to your .bashrc:

compare() {
    convert $1 -color-matrix "6x3: 1 0 0 0 0 0.4 0 1 0 0 0 0 0 0 1 0 0 0" ~/Desktop/changeColor.png;
    magick $1 ~/Desktop/changeColor.png -compose dissolve -define compose:args='60,100' -composite ~/Desktop/blend.png;
    rm ~/Desktop/changeColor.png
}

If the files generally live in the same location, you may only need to pass the test name as an argument instead of the full path. Additionally, since images hosted online can also be processed, this method can be useful during reviews.

To wrap things up, I bring surprise interviews! I interviewed my colleagues to get feedback on the introduction of snapshot testing.

Chang-san said: "Thanks to Hosaka-san's initial research, we are now able to handle snapshots in a more convenient way. With the help of Ryomm-san, the various implementation methods were organized into documents to ensure we didn't forget anything. It has been really great, and I am very grateful 🙇‍♂️."

Hosaka-san said: "The biggest bottleneck is the time it takes to run the full test suite, so I would like to work on reducing that in the future."

As for myself, I have noticed the frustration of having to fix snapshot tests when the logic changes but the screen remains unaffected. On the other hand, it has been helpful to confirm that there were no visual differences when transitioning to SwiftUI, which I think was good!
Introduction

Hello. I am Nakaguchi from the Mobile App Development Group at KINTO Technologies. I am the team leader of the iOS team for the KINTO easy-application app, and as part of team building we carried out 180-degree feedback, which I would like to share here. The article on our team retrospectives describes another initiative with the same team, so please have a look if you are interested.

Background

Recently, volunteer members within the company held a group reading session on "Learning from GitLab: How to Create the World's Most Advanced Remote Organization - How to use documents to achieve maximum results without an office". The session is described in detail in this article. It was very stimulating for me, and I wanted to bring something from it back to my team. The first thing that caught my interest was 360-degree feedback.

360-degree feedback is an evaluation method in which one employee receives feedback from multiple perspectives: colleagues, managers, subordinates, and so on. Typically, feedback flows from manager to subordinate, but I had always felt it is also important for members to give feedback to each other, and for subordinates to give feedback to managers, so I wanted to try this method. However, in the reading session it was mentioned that 360-degree feedback also has drawbacks, such as an overly broad survey scope and evaluations from people with no direct working relationship. I was then introduced to 180-degree feedback, which limits the survey to one's own team, and decided to go with that.

Aims

Through this 180-degree feedback, I had the following aims:

- Learn the gap between the role the team expects of me and the role I perceive for myself.
- Reacknowledge my strengths and weaknesses and use them for future growth.
- Create an opportunity to hear what team members honestly think day to day.
- Increase team cohesion, since answering the survey makes members think carefully about one another.

I thought this 180-degree feedback would be a very good opportunity for members to understand each other and improve the quality of our relationships.

How We Did It

Target members:
- Team leader: 1
- Engineers: 6

Survey method:
- An anonymous survey using Microsoft Forms.
- Each person answered the questions below about the six members other than themselves.
- Quantitative evaluation (5-point scale): questions on proactive attitude; on accepting others; on attitude in decision-making; on seeing things through; on learning the unknown; on autonomy.
- Qualitative evaluation (free text): questions on strengths; on points to improve; a word of thanks to the person.

Things we did to make it work:
- To get members to engage positively, I shared the background and purpose in advance during 1on1s.
- To build trust that it was fully anonymous, we ran a test survey beforehand and shared the result.
- To avoid a low response rate for reasons like "no time to answer", we set aside time in advance for answering.
- Since giving feedback can involve harsh or negative-sounding words, we placed an item at the end for expressing everyday gratitude, so people could finish the survey on a positive note. (Also, when reading the results, the thank-you comments make it easier to receive the feedback positively.)
- I wanted to show an open attitude as team leader, so I disclosed my own feedback results to the team and shared my improvement points. (Members were not required to disclose theirs.)

Summary of My Feedback Results

Strengths:
- Strong communication skills, easy to talk to. For example, speaks up actively in meetings and tries to explain things clearly.
- Learns proactively and shares information with other members; keeps up with new technology trends and shares them on Slack and in meetings.
- Works to improve teamwork; regularly plans team events to deepen relationships between members.
- Good at gathering information and responding quickly; when problems occur, responds fast and conveys accurate information to stakeholders.
- Caring and dependable; listens to members' concerns and gives appropriate advice.

Improvement points:
- Understanding of product specifications; sometimes proceeds with development without fully understanding a feature's specs.
- Tidy up task tickets more frequently; insufficient prioritization sometimes pushes important tasks back.
- Properly explain the background and purpose of initiatives; members sometimes lack understanding because the intent is not conveyed.
- Takes few risks; sometimes too cautious about new endeavors and misses chances.

On the strengths, I was very happy that the things I consciously work on day to day were recognized. On the improvement points, I noticed not only things I was already aware of but also things I had not been, which I can use for future growth. I also received words of thanks from the members at the end, which was a real motivation boost. I will keep striving to contribute to the team even more.

Team Strengths and Improvement Points Noticed Through the 180-Degree Feedback

I also summarized the results for the team as a whole.

Team strengths:
- Diverse technical skills and leadership: each member has strong technical skills and leadership.
- Communication: communication within the team is active and information sharing works well.
- Problem solving: proactive engagement with technical challenges and difficult tasks.
- Eagerness to learn: actively takes on new knowledge and technologies and keeps growing.

Team improvement points:
- More efficient information sharing: better ways to share information about new technologies and projects.
- Clearer division of roles: further clarifying roles and responsibilities to make the most of each member's abilities.
- Developing a big-picture view: emphasizing a project-wide perspective and sharing the purpose and process of tasks across the whole team.
- Technology sharing and knowledge management: promoting horizontal sharing of technology and knowledge to raise every member's skills.

I also summarized each team member's strengths and roles in a figure (shown below in the original article).

Post-Implementation Survey

After the 180-degree feedback, we surveyed members about how the exercise went (7 responses):

- Expectations: 7.29 before → 9.19 after
- NPS (What is NPS?): 57 (see the note after these results)
- "Would you like to run the 180-degree survey regularly (e.g., every six months)?": 86% answered "Yes"

AI summary of the free-text answers on post-participation satisfaction: respondents felt they deepened their self-awareness and were able to find their own issues. Feedback from others' perspectives gave them viewpoints they do not usually notice, and knowing concrete evaluations and improvement points clarified guidelines for future action. These results suggest the survey is an effective tool for self-reflection.
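As a side note not from the original article: NPS (Net Promoter Score) is the percentage of promoters (ratings 9 to 10) minus the percentage of detractors (ratings 0 to 6). With 7 respondents, a score of 57 is consistent with, for example, 5 promoters, 1 passive, and 1 detractor:

$$\mathrm{NPS} = \frac{5}{7} \times 100 - \frac{1}{7} \times 100 \approx 71.4 - 14.3 \approx 57$$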
Summary

Operationally, this round of 180-degree feedback had the following issues:

- Average scores were high overall, making it hard to differentiate.
- It coincided with a change in team membership, so the feedback was not fully appropriate for some members.

Overall, though, I feel the feedback was satisfying for the members, myself included. As the survey results show, members are also keen to run it regularly, so we will continue this initiative. The results let me reacknowledge my own issues and the team's, and I want to turn them into growth. I would also be delighted if each member likewise finds their own issues and treats them as opportunities to grow.
Introduction (Overview of Activities)

We started the "Manabi-no-Michi-no-Eki" at KINTO Technologies! So you'd ask, what is "Manabi (learning) + Michi-no-Eki (roadside station)" about? At our company, we do our best to foster a culture of output by hosting different activities, including this Tech Blog, presenting at events, and promoting various other initiatives. So, what drives our focus on output? We believe that input, or what we have learned, is a crucial prerequisite for output. That is why volunteers within our company created a team dedicated to strengthening our internal learning capabilities.

The name "Michi-no-Eki (roadside station)" carries various ideas as well. Have you ever been to a roadside station in Japan? They gather products from local communities, provide rest for travelers, and serve as hubs where you can encounter unique experiences found nowhere else. That is where our idea of Manabi-no-Michi-no-Eki (Roadside Station of Learning) comes from: a desire to create a unique place where everyone on the journey of learning can drop by, be thrilled by new encounters, and come together to be uplifted.

What Does the Manabi-no-Michi-no-Eki Do?

As a "roadside station" where study groups and workshops intersect, we aim to support internal activities centered around study sessions:

Engaging in internal communications
- Letting everyone know what study groups are being held.
- Sharing what the current study groups are like.

Supporting study groups
- For those who say, "I want to start a study group, but I don't know how."
- For those who are organizing study groups but want to improve them.
- Offering advice on other concerns.

Asking the Organizers: What Ideas Led to the Creation of "Manabi-no-Michi-no-Eki"?

Nakanishi: I have always believed that life is about learning. People constantly seek knowledge to find meaning in life, to find a place of solace in their hearts, and to energize their lives. The people who have impressed me most are those who are constantly learning new things; they shine the brightest. We believed that creating a company-wide space for such colleagues to gather would enhance our daily work output. At the same time, we began receiving feedback that information on internal study groups was scattered, along with a desire to understand what learning environments were available. This prompted the launch of this project.

HOKA: Working in human resources, I often hear during employee interviews a common desire for more communication across different groups. This sparked a feeling that I wanted to do something about it. At the same time, through my work I have observed that successful people at KINTO Technologies often participate in study groups. These two points intersected, sparking the idea of creating a system where people could interact with each other while learning. When I discussed this idea with my boss, he introduced me to Kinchan and Nakanishi-san, and that is how the "Manabi-no-Michi-no-Eki" project was born.

Kinchan: I have been involved with the culture of study groups on various occasions over the past 15 years. When I joined KINTO Technologies, I found that the company already had a good culture where learning is an integral part of everyday work. I wanted to expand this positive culture even further and contribute to the growth of people, our organization, and our business. That is why we decided to take action by gathering information about study groups across the company.
Establishment Step 1: Compile information on internal study groups!

KINTO Technologies is an organization where voluntary, employee-led learning activities such as study groups and reading circles are very active. Various study groups are held within the company, but questions often arise, such as "where and when are they held?", and some employees want to learn more about what is available. Having heard many such voices, I wanted to give the groups more visibility. This was the starting point of our activities. We quickly gathered information and discovered about 40 study groups. We were also aware of other, hidden study groups, so we estimated there were probably more than 60 groups in the company, including smaller ones. The three of us, amazed that there were so many active study groups, started discussions at the end of November 2023.

Step 2: What shall we do?

In our first meeting, we listed what we wanted to do. Should we just storm into these study groups? Should we post about them on the Tech Blog more often? Many ideas came up, but we settled on the premise that it would be important to make ourselves known internally first. So we decided to participate in an in-house LT (Lightning Talk) event, which was to be held three weeks later, on December 21. Without mentioning the "Manabi-no-Michi-no-Eki" yet, each of the three of us took the stage, and Kinchan won (yay!). First, we took action to make ourselves known to people within the company.

Note: For more information, please see our Tech Blog article about the LT event. ↓↓ We Held an In-House-Only LT (Lightning Talk) Event!

Step 3: Make an inception deck!

At our December 27, 2023 meeting, we realized we needed guidelines because we had so many things we wanted to do, and decided to create an "inception deck" at the beginning of the new year. An inception deck is a software development tool for ensuring that all team members share a common understanding of, and goals for, a project. In ours, we clarified the following four points:

- Why We Are Here
- Elevator Pitch
- Not-To-Do List
- Our A Team

By talking through the above, the name "Manabi-no-Michi-no-Eki (Roadside Station of Learning)" naturally came to mind, and we were able to decide on it without hesitation. In the process of creating our inception deck, we each shared our thoughts on learning, discussing cooperative learning and Peter Koenig's Source Principle. It was a moment when I felt that the process of creating the inception deck was itself a learning experience for us.

And now: Let's Start the Engines!

The inception deck was completed in late January 2024. When it was finished, we were a little impatient: we had a clear idea of our goals and tasks, and we were eager to get started right away. Kinchan, who proposed the inception deck, was probably secretly pleased, thinking "just as expected." As a first step to get things moving, we announced the birth of "Manabi-no-Michi-no-Eki" at the monthly All-Hands meeting with all KINTO Technologies members! At the same time, we also started the "Joining the next-door study group" series. On February 22, we gathered everyone running study groups in a meeting room to interview them. Without having prepared any interview questions beforehand, we just pulled out our phones and recorded on the spot. Both the interviewers and the interviewees were very surprised. Although there was some confusion, they cooperated with us. (Thank you all!)
We later edited out unnecessary segments so that it could be played as a podcast, and we successfully launched it to all employees via Slack on March 13.

Our Next Steps

We then ran three study groups, published two podcasts, and published two blog articles, all while reflecting on and discussing our future! What do people want to know? Are they interested in the study groups? What do the organizers want people to know? As a result of the discussion, we came to the conclusion that the purpose and needs of each study group are different, so it would be better to individually assemble a story tailored to each group's characteristics.

Moreover, what would be the role of our podcasts? Content that advertises a study group? Content that works as an internal newsletter? After considering these points, we concluded that KINTO Technologies holds so many study groups that our goal will be achieved if we can make visible how deeply rooted our learning culture is. As for the future, we have decided to proceed with creating podcasts, running study groups, learning from any failures, and expanding wherever possible!

In fact, I was a bit nervous about this agile approach of iterating, correcting, and steering things in a better direction. Before joining KINTO Technologies, I worked for a company with rigid rules and flows for handling information. As one of the organizers of "Manabi-no-Michi-no-Eki", this is an opportunity for me to learn about KINTO Technologies' development style of "Make Small, Grow Big" while working in HR. The "Manabi-no-Michi-no-Eki" has just begun. We look forward to keeping you updated about it on the KINTO Tech Blog from time to time. Thank you very much for your support!
An Issue We Encountered During Testing With Spring Batch Using DBUnit

Introduction

Hello. I am Takehana from the Payment Platform Team, Common Service Development Group at the Platform Development Division. This time, I would like to write about an issue we encountered while testing with Spring Batch + DBUnit.

Environment
- Java 17
- MySQL 8.0.23
- Spring Boot 3.1.5
- Spring Boot Batch 3.1.5
- JUnit 5.10.0
- Spring Test DBUnit 1.3.0

Encountered Issues

We are using DBUnit to test a Spring Boot 3 application that uses Spring Batch. The batch process follows the chunk model, where the ItemReader performs DB searches and the ItemWriter updates the DB. With this setup, when running tests with data volumes exceeding the chunk size, the tests never completed...

Investigations and Attempts

Observations

Code

new StepBuilder("step", jobRepository)
    .<InputDto, OutputDto>chunk(CHUNK_SIZE, transactionManager)
    .reader(reader)
    .processor(processor)
    .writer(writer)
    .build();

I was testing a batch with the step above as follows:

@SpringBatchTest
@SpringBootTest
@TestPropertySource(
    properties = {
        "spring.batch.job.names: Foobar-batch",
        "targetDate: 2023-01-01",
    })
@Transactional(isolation = Isolation.SERIALIZABLE)
@TestExecutionListeners({
    DependencyInjectionTestExecutionListener.class,
    DirtiesContextTestExecutionListener.class,
    TransactionDbUnitTestExecutionListener.class
})
@DbUnitConfiguration(dataSetLoader = XlsDataSetLoader.class)
class FoobarBatchJobTest {

    @Autowired private JobLauncherTestUtils jobLauncherTestUtils;

    @BeforeEach
    void setUp() {}

    @Test
    @DatabaseSetup("classpath:dbunit/test_data_import.xlsx")
    @ExpectedDatabase(
        value = "classpath:dbunit/data_expected.xlsx",
        assertionMode = DatabaseAssertionMode.NON_STRICT_UNORDERED)
    void launchJob() throws Exception {
        val jobExecution = jobLauncherTestUtils.launchJob();
        assertEquals(ExitStatus.COMPLETED, jobExecution.getExitStatus());
    }
}

When the test data was smaller than the chunk size, the test passed without any issues. However, when the test data exceeded the chunk size, the test froze and never completed. (This occurred even with a chunk size of 1 and a data count of 1.)

Suspecting the issue might lie with DB connections, I noted that Spring Batch treats each chunk as a single transaction; processing in parallel would therefore require more DB connections than the number of concurrent executions. So I adjusted the pool size to test this hypothesis:

spring:
  datasource:
    hikari:
      maximum-pool-size: 100

I changed 10 to 100, among other adjustments, but the issue was still not resolved...

Start debugging

I enabled debug logs and ran the application to observe the behavior. Execution seemed to stop at the log output on line 88 of org.springframework.batch.core.step.item.ChunkOrientedTasklet, so I set a breakpoint to verify. I then reached line 408 of org.springframework.batch.core.step.tasklet.TaskletStep. It appeared that a lock could not be acquired on the semaphore (i.e., it was waiting for the lock to be released), causing execution to halt there.

Delving deeper into Spring Batch

Continuing my investigation, I traced the flow of execution in the step processing. A rough outline of the relevant parts is as follows.
1. doExecute of TaskletStep is executed
2. A semaphore is created
3. The semaphore is passed to ChunkTransactionCallback (an implementation of TransactionSynchronization), linked with the transaction execution, and configured in the RepeatTemplate
4. Step processing begins for the chunk
5. The semaphore is locked in doInTransaction of TaskletStep
6. The main step processing is executed
7. The commit is executed by TransactionSynchronizationUtils
8. AbstractPlatformTransactionManager's triggerAfterCompletion method is called, and invokeAfterCompletion is executed in the process
9. invokeAfterCompletion releases the semaphore in the afterCompletion method of ChunkTransactionCallback
10. If data remains, return to step 4

During this test run, the semaphore in step 9 was never released, so execution passed through step 4 again and ended up freezing at step 5.

Why Was the Semaphore Not Released...?

Reviewing the code around the semaphore release in step 9, I found that it is guarded by a condition: status.isNewSynchronization() never became true, so invokeAfterCompletion was not executed. org.springframework.transaction.support.DefaultTransactionStatus#isNewSynchronization is as follows:

```java
/**
 * Return if a new transaction synchronization has been opened
 * for this transaction.
 */
public boolean isNewSynchronization() {
    return this.newSynchronization;
}
```

It returns whether a new transaction synchronization has been opened for this transaction.

Considerations

We have not yet fully traced why isNewSynchronization does not become true. However, I thought I might find some clues in the logs from our various trial-and-error attempts.

If @Transactional is not applied to the test class:

```text
2024-03-27T08:57:14.527+0000 [Test worker] TRACE o.s.t.i.TransactionInterceptor - Completing transaction for [org.springframework.batch.core.repository.support.SimpleJobRepository.update] Foobar-batch 19
2024-03-27T08:57:14.527+0000 [Test worker] DEBUG o.s.orm.jpa.JpaTransactionManager - Initiating transaction commit Foobar-batch 19
2024-03-27T08:57:14.527+0000 [Test worker] DEBUG o.s.orm.jpa.JpaTransactionManager - Committing JPA transaction on EntityManager [SessionImpl(1075727694<open>)] Foobar-batch 19
2024-03-27T08:57:14.534+0000 [Test worker] DEBUG o.s.orm.jpa.JpaTransactionManager - Closing JPA EntityManager [SessionImpl(1075727694<open>)] after transaction Foobar-batch 19
2024-03-27T08:57:14.536+0000 [Test worker] DEBUG o.s.b.repeat.support.RepeatTemplate - Repeat operation about to start at count=2 Foobar-batch 19
```

If @Transactional is applied to the test class:

```text
2024-03-27T09:04:04.600+0000 [Test worker] TRACE o.s.t.i.TransactionInterceptor - Completing transaction for [org.springframework.batch.core.repository.support.SimpleJobRepository.update] Foobar-batch 20
2024-03-27T09:04:04.601+0000 [Test worker] DEBUG o.s.b.repeat.support.RepeatTemplate - Repeat operation about to start at count=2 Foobar-batch 20
```

When @Transactional is applied, the JpaTransactionManager messages such as "Initiating transaction commit" are not logged. The test class uses TransactionalTestExecutionListener and executes everything within a single transaction via @Transactional. This ensures that the test data registered with DBUnit is accessible to the code under test and is rolled back after the test completes. From this, I concluded that isNewSynchronization does not become true because the existing transaction is reused (a new transaction is not started) when the step is executed.
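To make the deadlock mechanism concrete, here is a minimal, self-contained sketch (my own illustration, not Spring code) that merely mimics the control flow described above: a per-chunk semaphore whose release lives in an after-completion callback that is skipped when no new transaction synchronization was opened.

```java
import java.util.concurrent.Semaphore;

// Toy model of the freeze, NOT Spring code: the chunk loop locks a semaphore
// per chunk (TaskletStep#doInTransaction), and the release happens in an
// afterCompletion callback (ChunkTransactionCallback) that the transaction
// manager only invokes when a NEW synchronization was opened.
public class ChunkSemaphoreSketch {

    public static void main(String[] args) {
        runChunks(true);   // new synchronization per chunk: both chunks finish
        runChunks(false);  // outer test transaction reused: chunk 2 freezes
    }

    static void runChunks(boolean newSynchronization) {
        Semaphore chunkLock = new Semaphore(1);
        for (int chunk = 1; chunk <= 2; chunk++) {
            // tryAcquire keeps the demo from actually hanging; the real code
            // blocks here indefinitely.
            if (!chunkLock.tryAcquire()) {
                System.out.println("chunk " + chunk + ": frozen waiting for the semaphore");
                return;
            }
            System.out.println("chunk " + chunk + ": processed");
            if (newSynchronization) {       // the isNewSynchronization() guard
                chunkLock.release();        // afterCompletion releases the lock
            }
        }
    }
}
```

With newSynchronization set to false, the second chunk can never acquire the semaphore, which mirrors the frozen test.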
Workaround

As a brute-force workaround to avoid using TransactionalTestExecutionListener, I performed the cleanup manually around each test, which successfully prevented the freeze.

```java
class FoobarTestExecutionListenerChain extends TestExecutionListenerChain {

  private static final Class<?>[] CHAIN = {
    FoobarTransactionalTestExecutionListener.class,
    DbUnitTestExecutionListener.class
  };

  @Override
  protected Class<?>[] getChain() {
    return CHAIN;
  }
}

class FoobarTransactionalTestExecutionListener implements TestExecutionListener {

  private static final String CREATE_BACKUP_TABLE_SQL =
      "CREATE TEMPORARY TABLE backup_%s AS SELECT * FROM %s";
  private static final String TRUNCATE_TABLE_SQL = "TRUNCATE TABLE %s";
  private static final String BACKUP_INSERT_SQL = "INSERT INTO %s SELECT * FROM backup_%s";

  private static final List<String> TARGET_TABLE_NAMES = List.of("Foobar", "fuga", "dadada");

  /**
   * Create the test working tables.
   */
  @Override
  public void beforeTestMethod(TestContext testContext) throws Exception {
    val dataSource = (DataSource) testContext.getApplicationContext().getBean("dataSource");
    val jdbcTemp = new JdbcTemplate(dataSource);
    // Back up the existing data to temporary tables before the test
    TARGET_TABLE_NAMES.forEach(
        tableName ->
            jdbcTemp.execute(String.format(CREATE_BACKUP_TABLE_SQL, tableName, tableName)));
    // Initialize the tables
    TARGET_TABLE_NAMES.forEach(
        tableName -> jdbcTemp.execute(String.format(TRUNCATE_TABLE_SQL, tableName)));
  }

  /**
   * Restore the tables from the backups after the test.
   */
  @Override
  public void afterTestMethod(TestContext testContext) throws Exception {
    val dataSource = (DataSource) testContext.getApplicationContext().getBean("dataSource");
    val jdbcTemp = new JdbcTemplate(dataSource);
    // Restore the tables
    TARGET_TABLE_NAMES.forEach(
        tableName -> jdbcTemp.execute(String.format(TRUNCATE_TABLE_SQL, tableName)));
    TARGET_TABLE_NAMES.forEach(
        tableName -> jdbcTemp.execute(String.format(BACKUP_INSERT_SQL, tableName, tableName)));
  }
}
```

In short:

- Remove TransactionDbUnitTestExecutionListener and avoid using TransactionalTestExecutionListener. (Use DbUnitTestExecutionListener to load the test data from Excel.)
- Create a custom TestExecutionListener that moves the data from the target tables to temporary tables as pre-processing and restores it after the test. beforeTestMethod runs before each test method, and afterTestMethod runs after it.

This approach made it possible to run the tests while leaving Spring's own transaction management intact.

Impressions

Despite extensive searching, I couldn't find satisfactory information, which left the issue in a state of uncertainty for a while. However, by looking further into the Spring source code, I made various discoveries, and it turned out to be a valuable learning experience in code reading. (Although I haven't fully grasped everything yet...) I also wondered whether I was fundamentally misunderstanding how to use Spring and the test libraries: was I using them the way the library authors intended, and were there more suitable classes available? This has highlighted that I still have much to learn. I would like to keep approaching exploration and improvement with the same curiosity, asking, "How does this work?" Thank you for reading this article. I hope it will be helpful to others facing similar issues.
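For reference, the test class from earlier could then be wired up roughly like this. This is a sketch under the assumption that the two listener classes above are on the test classpath; it deliberately drops the class-level @Transactional and replaces TransactionDbUnitTestExecutionListener with the custom chain.

```java
// Hypothetical wiring of the workaround (mirrors the article's test class):
@SpringBatchTest
@SpringBootTest
@TestExecutionListeners({
  DependencyInjectionTestExecutionListener.class,
  DirtiesContextTestExecutionListener.class,
  // Custom chain: backup/truncate/restore + DbUnitTestExecutionListener,
  // replacing TransactionDbUnitTestExecutionListener.
  FoobarTestExecutionListenerChain.class
})
@DbUnitConfiguration(dataSetLoader = XlsDataSetLoader.class)
class FoobarBatchJobTest {
  // ... same @DatabaseSetup / @ExpectedDatabase test body as before ...
}
```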
[^1]: Post 1 by a member of the Common Service Development Group: [Domain-Driven Design (DDD) incorporated in a payment platform intended to allow global expansion]
[^2]: Post 2 by a member of the Common Service Development Group: [Remote Mob Programming: How a Team of New Hires Achieved Success Developing a New System Within a Year]
[^3]: Post 3 by a member of the Common Service Development Group: [Efforts to Improve Deploy Traceability to Multiple Environments Utilizing GitHub and JIRA]
[^4]: Post 4 by a member of the Common Service Development Group: [Creating a Development Environment Using VS Code's Dev Container]
[^5]: Post 5 by a member of the Common Service Development Group: [Spring Boot 2 to 3 Upgrade: Procedure, Challenges, and Solutions]
[^6]: Post 6 by a member of the Common Service Development Group: [Guide to Building an S3 Local Development Environment Using MinIO (RELEASE.2023-10)]
Introduction

Hello. I am Nakaguchi from KINTO Technologies' Mobile App Development Group. I lead the iOS team for the KINTO Easy Application app, which I will refer to as "the iOS team" in this article for convenience. We hold Retrospectives irregularly, but I find that they can be rather challenging. Am I succeeding in bringing out everyone's true feelings? What are the team's real challenges? Is my facilitation effective? And so on. I recently watched a webinar by Classmethod, Inc. and was so impressed by their session on "How to Build a Self-Managed Team" that I decided to apply for another training session on Retrospectives that they introduced in it. In this article, I'll share my experience attending that session.

Pre-Alignment Session

Before the Retrospective, we had a meeting with Mr. Abe and Mr. Takayanagi from Classmethod. In order to hold Retrospectives best suited to our team's situation, we discussed the current status of the iOS team with them for nearly an hour.

Overview of the Retrospective

On the day of the Retrospective, Mr. Takayanagi and Mr. Ito came to the company to facilitate the meeting. The meeting lasted about two hours and followed this general flow:

1. Self-introductions
2. Aligning the purpose of our Retrospectives
3. Individual exercise on "How to make the team a little bit better"
4. The same exercise as above, but in pairs
5. Sharing the findings with the whole team
6. Thinking about specific action plans in pairs
7. Sharing the findings with the whole team
8. Closing

First Half

Out of the almost two-hour meeting, it is worth noting that about half of the time was spent on "1. Self-introductions" and "2. Aligning the purpose of our Retrospectives". During "1. Self-introductions", the facilitators asked us for things such as our names or nicknames, our roles in the team, and the extent of our interactions with other team members. They looked not only at the atmosphere of the team and the personality of each of us, but also at the relationships and compatibility between team members. During "2. Aligning the purpose of our Retrospectives", I got everyone to agree on what can be done to make the current team a little better, which was a topic I had requested. After a major release last September, our team has been focused on improving features and refactoring, so although we are in a less busy period, it is no easy feat to make a team in our situation a little better. I also explained the purpose, role, and expectations I, as the meeting organizer, had in mind for each participant when inviting them. I was told that this helps clarify how everyone should participate and makes it easier for them to speak up. It was also a good opportunity for me to talk about things I usually don't find the right timing for, or can't say directly. By spending this time in the first half of the meeting, we created an atmosphere where it was easy for everyone to speak, and I felt that the overall rapport improved greatly.

Facilitation

Second Half

After thinking individually about "3. How to make the team a little bit better", we proceeded with the work. We didn't use any Retrospective-specific framework; instead, we simply wrote down on sticky notes what could make the team a little better. We did individual work first and then moved on to pair work. There are situations where pair work is beneficial and others where it is not; in this case, the team clearly benefited from it.
Also, the pairing of people is key, as it is important not to cause psychological strain among the participants.

Pair Work

After that, everyone gave presentations, and many opinions came up that I had not been able to draw out in the Retrospectives I had held so far. I felt I was able to draw them out thanks to the rapport built in the first half and the pair work. Then, based on the opinions that came up, we moved on to step 6, thinking in pairs about what specific actions should be taken, and each pair presented their ideas.

Presentation

As a result, we decided to implement the following actions:

- Creating a Slack channel, so that everyone has a place to chat freely. Since we could build more trust by talking more about ourselves, we decided to make it a private channel rather than a public one.
- Setting up a weekly meeting dedicated to chatting.
- Trying to gather in meeting rooms as much as possible (many people used to attend meetings online from their desks even when they were in the office).
- Setting up guideline consultation meetings regarding assigned tasks.
- Clearly stating the deadline on task tickets.

We started addressing these the very next day.

Closing

At the end of the meeting, Mr. Takayanagi talked about the importance of customizing meetings: understanding the time allocation, the characteristics of the participants, and how to draw out their opinions. In this Retrospective in particular, he focused his facilitation on people, using a lot of pair work.

Post-Retrospective Survey Results

Here are the results of the feedback survey taken after the Retrospective (10 responses).

- Change in evaluation — Before: 6.3 -> After: 9
- NPS: 80 (What is NPS?)
- AI summary of "How satisfied did you feel after you participated?" (free text): The survey results showed that participants were happy with the session and the facilitator's explanations. In addition, there were many positive comments about how specific decisions were made that led to the next actions. Furthermore, the opportunity to understand the thoughts of other team members, and the chance to hear things that are not normally said, were also highly valued. These results suggest that the meeting was meaningful for everyone.

**Just being above 0 would have been great, but we got a whopping NPS of 80!**

Final Thoughts

Through this Retrospective, I realized that many members felt there was a lack of communication, and we were able to decide on our next course of action, so it was a very fulfilling Retrospective. I was happy to see from the survey results that the participating members were satisfied as well. I also realized how important the role of the meeting facilitator is. It is an advanced skill that cannot be acquired overnight, and I think the organization should focus on developing and acquiring such skills. To start with, I would like to study facilitation and become able to conduct better meetings.
I am Pham Hoang, in charge of developing the authentication platform for KINTO services. In this article, I will talk about passkeys as implemented in the Global KINTO ID Platform (GKIDP). After attending OpenID Summit Tokyo 2024 and hearing about passkeys combined with OIDC, I wanted to share how much passkeys can benefit our ID platform.

I. Passkey Autofill in GKIDP

Passkeys are a replacement for passwords that let users sign in to websites and apps from their devices faster, more easily, and more securely. Here is how a user authenticates with a passkey in a single click.

![](/assets/blog/authors/pham.hoang/fig1.gif =400x)
Figure 1. Logging in to KINTO Italy's ID platform with a passkey

The great thing about passkeys is their seamless UX, which works just like password autofill. Users do not need to understand the subtle differences between passkeys and passwords. The system uses asymmetric cryptography behind the scenes, with no passwords or other secrets for users to remember; authentication with Face ID alone completes the whole flow.

Passkeys are the most secure and advanced authentication mechanism, supported by Android and iOS since late 2022. They are still under active development and continue to be upgraded. To keep GKIDP (Global KINTO ID Platform) convenient and up to date with the latest technology, we introduced passkey autofill in July 2023, shortly after Mercari, Yahoo! JAPAN, GitHub, and Money Forward each introduced it. In the next part, I will explain how we leverage passkeys for Federated Login so that GKIDP users can use the "global login" feature more comfortably.

II. Passkeys in Federated Identity

The Global KINTO ID Platform (GKIDP) is the authentication system for KINTO services deployed, as of March 2024, in Italy, Brazil, Thailand, Qatar, and countries in South America. To comply with GDPR and other data protection regulations, GKIDP is split into multiple identity providers (IDPs), one per country, and identifies each user as a single global ID through a "coordinator." With the global ID, users can access KINTO services around the world with a common identity.

Figure 2. GKIDP and passkey-enabled IDPs

Normally, when logging in with a passkey (see Figure 1), the user authenticates through federation with their local IDP and uses KINTO services within their own country. In our case, however, passkeys also have to work in RP (Relying Party) applications and in "satellite services" such as KINTO ONE Personal in Brazil and other KINTO services, so we implemented passkeys in each country's IDP (for example, the Brazil IDP). This benefit was also covered at OpenID Summit Tokyo 2024, which we attended, and it was good to learn that implementing passkeys in combination with the OpenID Connect protocol is recommended.

GKIDP also has a unique feature that lets users who travel or move to another country where KINTO services are available log in to KINTO and related services abroad just as they do at home. We call this the "global login" feature. It takes several steps to use, but everything can be managed with a single username and password, so users do not have to remember separate credentials for each service. With passkeys, the login process for global users becomes even leaner, with no need to remember or type login information. For example, let's look at how a KINTO GO user in Italy (the user from Figure 1) accesses the KINTO SHARE service in Thailand using global login. With just a few clicks, the login time is reduced from an average of 2-3 minutes to about 30 seconds (Figure 3). Regardless of whether the local IDP supports passkeys, a single passkey can be used to access all KINTO services.

![](/assets/blog/authors/pham.hoang/fig3.gif =300x)
Figure 3. Global login with a passkey

Passkeys are used not only for local and global login but also on every authentication screen, including re-authentication. Once a passkey is registered, users hardly ever need a password to confirm anything anymore.

III. Passkeys and Their Demand

Figure 4. Users registered with passkeys

On the Italian IDP, 875 users have registered using passkeys, accounting for 52.2% of new users since the passkey release. We expect this percentage to grow as more users update to OSes that support passkey autofill (iOS 16.0 and above, Android 9 and above). At KINTO Brazil, where desktop users are the majority, more than 20% of the 1,176 newly registered users since release use passkeys, even though passkeys are not yet widely used on Microsoft PCs.

IV. Conclusion

As a KINTO engineer, I am very happy to introduce new technology toward a passwordless future and to strengthen the protection of our users' data. With passkeys, users can now log in easily with the highest level of security. I look forward to connecting more KINTO services around the world to our new IDP hub, GKIDP.

Other articles by Hoang Pham: https://blog.kinto-technologies.com/posts/2022-12-02-load-balancing/
[Link to Amazon](https://amzn.asia/d/06GXK0Fd)

I had been planning to summarize the contents of Vision, co-authored by Hans P. Bacher and Sanatan Suryavanshi, as a memo so I wouldn't forget them, but it is such a good book that I wanted to share it, so here is an introduction to part of it.

The designed visuals that fill our daily lives evoke all kinds of emotions in us. This book unravels why certain visuals leave such a strong impression on us, and how to understand the psychology behind them. The authors teach concrete methods for telling stories through visuals, for example how choices of color and shape act on our emotions. With this, even non-specialists can interpret their everyday visual experiences more richly. I believe reading Vision gives us a new perspective on the everyday. If this blog post piques your interest, I highly recommend picking up a copy.

The book is structured as follows:

- Foreword
- Introduction
- What is the process of visual communication?
- The psychology of images
- Line
- Shape
- Value
- Color
- Light
- Camera
- Composition
- Summary

In this post, I will briefly introduce "What is the process of visual communication?", "The psychology of images", and "Line".

What is the process of visual communication?

The authors describe visual communication as an automatic process in which what enters through the eyes instantly triggers various emotions. For example, just by looking at a movie poster depicting "a shadow stretching down a dim alley" and "a person trembling in fear there", we intuitively recognize that the film is about anxiety and fear. This instantaneous emotional response is triggered automatically. The stated aim of the book is to give readers the ability to break this automatic processing down into processes and elements and to understand why such feelings are evoked, and the next chapter promptly explains this automatic processing from a psychological angle.

The psychology of images

Why do we relax or feel fear when we look at an image? To explain this process, the book discusses three psychological aspects of images:

1. Association
2. Mechanism
3. Resonance

1. Association: For example, when a dim back alley is combined with dark shadows, we typically feel fear. Images and footage are linked to our past memories, and the brain automatically evokes specific emotions when it sees them, much like the process of "association". Therefore, by choosing and associating the right visual elements, a work can leave a powerful impression on the viewer.

2. Mechanism: In visual design, combinations of elements such as line, shape, and color play an important role. For example, when opposing colors (*1) are placed next to each other, contrast arises and creates stimulation. In this way, visual elements interact to produce stimulation or, at times, harmony.

3. Resonance: "What you are trying to say 'resonates' when the content you want to convey and the way you convey it match." (quoted from p. 20) For example, if a scene narrating the heartbreaking death of a loved one uses a pop color palette, the sadness becomes harder to convey; things whose content and delivery do not match will not resonate with the viewer.

The authors emphasize that actively combining design elements such as color in this way enhances the appeal of a "picture", and argue that such elements should not be left to "chance" or "whatever happens to be there", but chosen deliberately to appeal to the viewer's emotions.

The anatomy of an image

"Anatomy" here means dissection. The authors say that by breaking a "picture" down using the items listed below, you can build a "way of seeing", which is the foundation of visual storytelling, and they recommend keeping the list somewhere you can refer back to at any time.

- Subject: the subject, literally.
- Format: the aspect ratio of the image.
- Orientation: portrait or landscape.
- Framing: the placement within the composition.
- Line: linear elements.
- Shape: the forms within the frame.
- Value: the degree of lightness or darkness.
- Color: color, literally.
- Pattern: design or repeating elements.
- Silhouette: the outline of a design element filled in with black.
- Texture: the information that defines the contours of design elements.
- Light: brightly shining elements.
- Depth: the sense of space.
- Edges: the strength or softness of the boundaries separating shapes.
- Movement: all moving elements.

Line

Lines, called "compositional lines", create the paths the eye follows. Though often dismissed as too basic, the authors say lines have many facets and the power to enable a wide variety of effects. The figure below (partially excerpted) illustrates the main types of line. The boundary lines of the frame apply to all of examples 1-4: the top, bottom, left, and right borders that exist in every composition. In 1 and 2, the people in the composition become compositional lines according to the direction they face. In 3, the actual and implied movement of objects forms clear lines. In 4, a dark mass becomes a compositional line.

Line direction

Line direction is the relationship of a line to the frame's four borders. Direction can express emotion, and combined with an appropriate motif it can convey rich feeling. For example: vertical suggests strength defying gravity, and dignity (things towering overhead, like trees and buildings); diagonal suggests drama, energy, and dynamism through contrast with the horizontal and vertical (broken balance and a sense of motion); horizontal suggests calm and stillness (the horizon, the sea, open spaces).

Line placement

The placement of lines divides the frame and produces shapes, and the balance of those shapes changes the appeal of the composition. Equal division and left-right symmetry feel unnatural and artificial; asymmetry can be attractive depending on the balance, as in the rule of thirds or the golden ratio.

Line quality

The quality and character of a line strongly evoke emotion: straight lines convey tension; curves, softness; thick lines, strength and sturdiness; very thin lines, refinement and delicacy.

Harmony and contrast

The moment you draw a line within the frame, harmony or contrast is created. That is, the relationships between lines produce rhythm, harmony, dissonance, balance, imbalance, unity, and so on. For example, a horizontal line at the bottom edge creates harmony, but tilting it immediately produces contrast. However, taken too far, both harmony and contrast lead to boredom or clutter, so balance requires care.

Rhythm

Repeating lines creates rhythm, adding a new dimension to the composition. Regular lines at even intervals convey orderliness (and potentially boredom); random repetition conveys energy and tension.

[Summary]

Using properly associated design elements makes the mechanism work well and produces visuals that resonate with the viewer. Even an element as simple as line can express emotion, tension, boredom, harmony, and contrast. The authors also repeat one piece of advice: "Don't get caught up in details; simplify." By doing this repeatedly, your understanding of composition deepens and you become able to apply it in your own way.

This has been an introduction to part of the book's opening chapters. I hope even this much broadens your perspective on analyzing visuals. I would like to introduce other chapters if I have the opportunity.
Introduction

Hello! I am Viacheslav Vorona, an iOS engineer. Attending this year's try! Swift Tokyo with my team members gave me a chance to think about the direction of the Swift community as a whole. Some things are quite new, while others have been around for a while but have recently evolved, and in this article I will share my impressions with you.

The Topic We Can't Ignore...

Let's start with the unavoidable one. The long-awaited Apple Vision Pro went on sale roughly two months before try! Swift, so it makes sense that the venue was overflowing with Apple fans. Those who hadn't yet tried on the Apple Vision Pro were longing for the chance to wear it, even if just for a few minutes.

Satoshi Hattori's session "Let's Build a visionOS App in Swift" had a full house. The app itself was a simple one that merely floats a circular timer in the user's virtual space, but when Hattori-san actually put on the headset and started showing the results of his work in real time, the room lit up. On the second day of the conference, spatial computing fans also held a small unofficial meetup. Unlike Apple's other devices, the Vision Pro has formed its own subcommunity within the Swift community. People who grew up watching futuristic virtual devices in movies are starting to feel that the cyberpunk dream is getting closer. It is exciting, though for some people it may feel threatening. And of course, I should not forget to mention that the "Swift Punk" performance at the conference opening was also inspired by the Vision Pro.

The opening performance, done with $10,000+ of props

New Frontiers for Swift

Even if they are not the most cutting-edge trends, there have recently been interesting developments on many fronts; in other words, the Swift community is trying to expand further beyond the realm of Apple devices.

Server-side Swift has existed for a while. Vapor was released in 2016 and, while not widely adopted, is still going strong. Tim Condon of the Vapor Core Team gave a very interesting try! Swift presentation about migrating large codebases, heavily informed by the migration Vapor is currently undertaking to fully support Swift Concurrency in version 5.0. According to Tim, that version is planned for release in the summer of 2024, so for anyone wanting to try server-side Swift, it may be the perfect time to start.

Tim Condon, the mastermind behind Vapor. Nice shirt!

To go with an API written in Swift, you can also try implementing web pages in the same Swift language. This was the topic of Paul Hudson's talk. His presentation on HTML generation using Swift result builders was something only someone with his experience could pull off, and it was great fun. The climax of his talk was the announcement of Ignite, a new site builder using exactly the same principles he had been speaking about.

Paul Hudson: the mastermind behind many things, including Ignite

Another memorable session in this category came from Saleem Abdulrasool, a devoted fan of cross-platform Swift, who talked about the differences and similarities between Windows and macOS and the challenges Swift developers face when trying to build Windows applications.

Last but not least was Yuta Saito's talk on binary size reduction strategies for Swift. At first glance it seems unrelated to the trend I am describing, but when Saito-san showed a simple Swift app deployed to Playdate, a tiny handheld game console, I realized it was not unrelated at all. It was impressive.

It is delightful that Swift is not only gaining new capabilities on Apple's platforms but also constantly exploring new territory.

"The Computer (Paranoia)"

Finally, let's talk about AI, LLMs, and the other topics that have been everywhere these past few years, with ever newer "more powerful than anything" models appearing one after another. In today's digital gold rush, software companies are trying to apply AI processing to anything and everything, and of course the Swift community cannot escape its influence. The trend was visible throughout try! Swift.

One of the first presentations at the conference was by Xingyu Wang, an engineer at Duolingo. He spoke about the role-play feature introduced in collaboration with OpenAI: leveraging an AI-powered backend, the challenges of optimizing AI generation time, and the solutions his team applied to mitigate them. I remember it being positive overall, painting a bright picture of the boundless possibilities of AI.

On the other hand, the session I had my eye on before the conference was Emad Ghorbaninia's "What Can We Do Without AI in the Future?" I was very curious about what it would cover. Actually attending it made me think deeply about the challenges we will face, as developers and as human beings, as AI develops further. In Emad's view, to hold our own against artificial intelligence, we should focus on the creative processes where humans show their greatest strengths. I can't argue with that.

Conclusion

Looking back on the discussions at try! Swift Tokyo, it is fascinating to see how the Swift community is evolving and adapting to the latest technology trends. From embracing innovative hardware like the Apple Vision Pro to pioneering new areas such as server-side Swift and AI integration, the progress on display highlights a community that responds broadly and keenly to where technology is heading. This curiosity and passion for innovation make Swift not a language confined to iOS development, but a powerful toolset for expanding what software can do. Going forward, the dynamic interplay between developers' creativity and technology promises even more exciting advances within the Swift community. I am very much looking forward to being part of this vibrant ecosystem!
Introduction

Hello! I'm Felix, and I develop iOS applications at KINTO Technologies. This was my first time going to a Swift-focused conference: I attended try! Swift 2024 Tokyo, held in Shibuya from March 22 to 24, 2024. It was a perfect opportunity to catch up on the industry's latest trends and expand my network with other engineers.

Presentations

Among the many compelling presentations, let me highlight two that left a particular impression on me.

First was the presentation about Duolingo's AI tutor feature. The speaker, Xingyu Wang, talked about implementing the AI tutor, touching on challenges such as building the chat interface and optimizing the latency of useful phrases, and introduced solutions leveraging the capabilities of GPT-4. It was great that he covered not just the front end but the overall software architecture, including the challenges they currently face. On a personal note, I once had a similar goal of building an English-learning app for Japanese users, so this knowledge is extremely useful for building a similar service. Incorporating a well-made role-play feature lets learners practice their conversation skills in an environment close to real life.

The other presentation I want to introduce was by Point-Free, well known in the framework community. Their talk on testing Swift macros, introduced in Swift 5.9, was especially memorable. Macros, which are compiler plugins, enhance Swift code by generating new code, diagnostics, and fixes. The two presenters highlighted the subtle nuances of Swift and walked us through the complexity of writing and testing these macros. They also showed how their testing library, swift-macro-testing, improves on Apple's tooling by making the macro-testing process simpler, more efficient, and more effective. It was clear that the presenters have a deep understanding of Swift and are taking innovative approaches to improving development workflows.

Booths

The booth area was bustling with attendees mingling with companies and collecting swag. CyberAgent's booth was particularly engaging, with a whiteboard where attendees could post sticky notes summarizing code. I found this interactive booth activity effective not only for deepening knowledge but also for further raising interest in Swift.

Instead of the usual Q&A after each talk, this conference adopted a new style: anyone with questions could meet and talk with the speaker directly at a designated booth after the presentation. I think this made it easier for attendees to ask questions and interact with speakers, creating better opportunities for communication and networking.

Workshops

On the final day of the conference, we could choose a workshop to join. I picked the one on TCA and sat near the back of a large room holding about 200 people. The workshop mainly walked through building a sample "SyncUp" app using the Composable Architecture. I tried to code along at first but eventually decided to just watch. What I found interesting is that the framework provides a structured approach to managing side effects: the parts of the app that interact with the outside world are testable and easy to understand. The unit-testing process in particular looked efficient and clear.

Conclusion

Attending try! Swift Tokyo for the first time was a very rewarding experience. The conference served as a platform for connecting with industry leaders and peers, and I was captivated by cutting-edge Swift technology. The presentations were meaningful, offering deep dives into real-world challenges and creative solutions in iOS development. The interactive booth activities and specialized workshops provided great learning and networking opportunities, adding real value to the conference. Thank you for reading to the end! If this article has piqued your interest, please do join try! Swift next year!
Introduction

Hello, Tech Blog readers. We recently decided to implement Marketing Cloud and to use the "Norikae GO email delivery" with it, looking at creating a Journey that triggers an automated process instead of sending individual emails. A Journey is a feature that automatically deploys multiple marketing actions when a customer takes a specific action. For example, when a customer clicks a specific link in an email, the relevant information is automatically delivered as part of an automated marketing process. Unfortunately, we had trouble finding a way to add Journey Builder as an activity in Automation Studio, so I have summarized in this article the results of our various trials.

Email Delivery Partner

There are several reasons for using Journey Builder:

- It can leverage branching, randomness, and engagement
- It can be integrated with Salesforce, for example when creating tasks and cases, updating objects, and so on

However, Journey Builder does not allow you to execute scripts or SQL queries. For example, you may need to merge synced data sources before sending a large volume of emails. In such cases, Journey Builder must be called after these activities are completed in Automation Studio. It is therefore desirable to integrate Automation Studio with Journey Builder to send emails. Let's see how to set this up together.

Settings

1. Create an Automation and add a Schedule as the starting source. Configure the Schedule for a future time and save it. Remember to save it; otherwise, the later settings will not work.
2. Add your desired activities, such as SQL queries and filters. This is essential for integrating with the Journey: a Journey cannot be triggered if no data extension is selected.
3. Create a Journey. Add a data extension as the entry source, and select the data extension used in Step 2. This is important: if you choose a different data extension, you will not be able to integrate with the Automation from Step 1. Note: at this point, even if you save the Journey and return to the Automation, you will not be able to select the Journey from the activities, because there is no "Journey" option among the Automation activities. But wait a moment. Now, I'm going to show you some magic! ![Step3-2](/assets/blog/authors/Robb/20240319/03-2.png =300x)
4. In the Journey, click "Schedule" at the bottom of the canvas, select "Automation" as the schedule type, and then click "Select." Can't choose "Automation" because it is inactive? Why don't you go back to Step 1 and save the Automation?
5. In "Schedule Summary," click "Schedule Settings" and select the Automation you created in Step 1.
6. Edit a contact's rating to specify the records to be processed by the Journey.
7. Add email, flow control, and so on. Your setup is now complete! Let's validate and activate the Journey. Don't worry, emails will not be sent immediately after activation; the timing of the transmission depends on the Automation.
8. Back in the Automation, the Journey has now been added to the Automation on its own, right? Don't you think it's amazing? Finally, summon the courage to activate your Automation. See, every time the Automation is triggered, the Journey will also be triggered!

Thank you for reading. I am now going to take a break with a cup of coffee. I hope you will all refresh yourselves with your favorite drink and enjoy automatic email delivery. Happy marketing!

Source: https://www.softwebsolutions.com/resources/salesforce-integration-with-marketing-automation.html
Introduction

Hello everyone! I am Kin-chan from KINTO Technologies' Development Support Division. I usually work as a corporate engineer, maintaining and managing IT systems used throughout the company. The other day, I gave a presentation at "KINTO Technologies MeetUp! - 4 case studies for information systems, shared by information systems," a study session in the format of case presentations plus roundtable discussions, specialized in the corporate IT domain. In this article, I will introduce the content of the case study presented at that study session, along with supplementary information.

The Presentation

You can check the full presentation material below (in Japanese): [An Introduction to AGILE SaaS] The Secrets to Achieving Maximum Results Quickly with Minimum Workload. In addition to the slides I used in my presentation, I will provide extra information to clarify any difficult parts and cover topics I couldn't address at the event.

Title Selection

First of all, I'd like you to look at the title. People interpret "Agile" in many different ways, which makes it daunting to include in the title of a presentation. However, I chose to include it anyway, hoping that someone who listened to or saw my presentation might gain insights like "Oh, so this can also be considered Agile" or "It's not such a difficult topic," and be inspired to take new actions. (Of course, the fact that it's an "attractive" keyword was also a factor in my decision.)

What I Will vs. Will Not Speak About Today

Since I used the keyword "Agile" in the title, I thought it would be good to focus on content that can be linked to the values of Agile software development. If you're interested in more detailed information about the processes, or the small stories that happened during the projects, please consider joining KINTO Technologies.

Background

The introduction of IT Service Management (ITSM) tools, which include inquiry and request management, began with the IT team. Because that implementation went relatively smoothly, there was a basis for extending it to management departments beyond IT. Before this, there weren't many opportunities to interact with managing departments outside of IT within the company. Personally, I had project experience with many non-IT departments, including before my previous job, so when I was appointed to drive this project, I was glad that I could leverage my past experience.

The decision to opt for an Agile approach stemmed from having a rough goal in mind without a concrete set of requirements or functions, and wanting to succeed with minimal workload while still creating value. Instead of a rigidly defined, phased implementation (as one would do in a Waterfall model), the Agile approach of iterating through dialogue and course corrections while building minimal viable products seemed more suitable. There is a slide here that says, "I think it's better to go Agile!" It might look like we had decided on Agile from the project's inception, but in reality it was more like, "Hmm, how should we move forward? Let's start by listening to what the stakeholders have to say." It was only after hearing from the Administration Department team members that we got the sense of "With them, we could proceed in this style!", which led us to adopt the Agile approach described later.

About Agile

When someone within the company asks me, "What is Agile?",
I typically respond with something like, "It's a state in which work progresses by focusing on value while iterating kaizen (continuous improvement) in short cycles." While those familiar with software development may understand the values and principles outlined in the Agile Manifesto, others might not resonate with it. Lately, I've noticed that explaining Agile has become easier thanks to the publication of books like 'The Agile Kata' and other Agile books targeting non-IT audiences.

As for the progress of the project...

For the next slides, I made a conscious effort to explain "What makes it Agile?" in a way that links back to the values outlined in the Agile Manifesto as much as possible.

The message I wanted to convey with this slide is the establishment of mechanisms that minimize unnecessary communication and enable immediate engagement in the essential conversations. In typical software development scenarios, common questions are "What tasks are currently being performed?" and "What do we want to accomplish?". Given that this was a "SaaS implementation with a certain degree of framework already established," I deemed it more appropriate to explore "effective usage based on the existing framework" rather than "defining requirements based solely on current tasks." Furthermore, one of the strengths of low-code tools is the significantly lower cost of the build-break-fix cycle in the initial stages. This made it feasible to create a prototype providing minimal value before the first meeting. As a result, instead of starting the conversation with "So, what kind of product do you want to create?", we could begin the initial meeting with discussions centered on a concrete, functional prototype, asking questions like "How about a system that works like this? Do you notice any issues with it?" This allowed us to engage in discussions grounded in tangible, working examples right from the start. These aspects reflect the following values from the Manifesto for Agile Software Development:

- Individuals and interactions over processes and tools
- Working software over comprehensive documentation

What I wanted to convey in this slide is the creation of a mechanism to deliver value at short intervals, get feedback, and provide a system that makes sense. One common pattern in meetings is taking points away as homework for internal discussion later, for example, "consider what kind of menu structure would be good" or "discuss internally what kind of process flow is best." This time, rather than leaving such "takeaway considerations" entirely to the other party, we participated in those discussions by being invited as guests to their alignment sessions. By doing so, we could immediately address any questions, concerns, or discrepancies that arose during the conversation, and could swiftly provide answers or even start making system adjustments on the spot. As a result, despite it being a "separate discussion" setting, we were able to make progress not only on specification changes based on the discussions but also on actual functional improvements. I also mentioned that "significant specification changes emerged at this point": what I meant was that we were able to detect a situation where it was more beneficial to essentially "start over" rather than modify what had been done so far. Of course, this meant discarding what had been built up to that point.
However, by actively participating in the discussions, we fully understood the necessity and value of rebuilding, which allowed us to make this decision with confidence. These aspects reflect the following values from the Manifesto for Agile Software Development:

- Customer collaboration over contract negotiation
- Responding to change over following a plan

Finishing the Project

Through this project, one of the most significant gains I feel I've obtained is "trust." It is only my one-sided impression, but I feel I've helped create opportunities where people think, "Working with this person leads to good results," and "I'd like to consult them again next time something comes up." Certainly, there are many approaches to project management that can yield positive results, not just those aligned with Agile practices like the example we discussed. But if you ever find yourself stuck on how to proceed, I recommend taking the values inherent in Agile as a reference and adjusting your actions just a little:

1. Envision your desired outcome and apply small changes in behavior to achieve it.
2. Observe the results of those small changes and use that feedback to further refine your vision of the desired outcome.
3. Continue making further small changes in your behavior.

Once you can repeat this cycle, it's safe to say you've adopted an "Agile" mindset.

Conclusion

As mentioned at the beginning, I hope anyone who has gone through this material gains insights such as "Oh, this is also Agile" or "It's not such a difficult topic." I would be happy if this serves as encouragement for your next actions.
Introduction

Greetings, this is Morino from KINTO Technologies. From Thursday, June 29th to Friday, June 30th, 2023, a colleague and I attended the Cyber Security Symposium Dogo 2023 held in Matsuyama City, Ehime Prefecture. The event's purpose was to recognize the importance of countermeasures against cyberattacks as digitalization accelerates in a society learning to coexist with the coronavirus, to fight cyberattacks with the power of local security communities, and to deepen discussions on policy trends, technological trends, and examples of cyberattacks. We gained a lot of inspiration and knowledge from the lectures and the other participants.

When we arrived at Matsuyama Airport, we were greeted by Mican, the mascot promoting the image of Ehime Prefecture. There was also a mikan (mandarin orange) juice tower and a mikan juice faucet.

There were many interesting talks and presentations at the symposium, but I would like to introduce some of the ones that left an impression on me. (See the full list of talks and presentations here.)

Japan's Cybersecurity Policy

First, Mr. Tomoo Yamauchi (Director-General, Cybersecurity Office, Ministry of Internal Affairs and Communications) gave a keynote speech on "Japan's Cybersecurity Policy." Under the theme of "leaving no one behind," Mr. Yamauchi explained the country's efforts to secure a free, fair, and safe cyberspace, including changes in the targets of Cybersecurity Awareness Month and improvements in cloud usage within government agencies. I thought the theme of "leaving no one behind" was wonderful.

Secuminity (Security + Community) and Generative AI

At the night session, I listened to a lecture on "Secuminity (Security + Community) and Generative AI" by Mr. Tsuneyoshi Hamamoto (IT Integration Department, Energia Communications, Inc.) and Mr. Matcha Daifuku (Risk Consulting Department, luck Technology, Inc.). Mr. Hamamoto explained the concept of secuminity, a term coined by combining security with community. Secuminity is where people concerned with security interact, share knowledge and experiences, and collaborate and learn from each other online and offline; I understood it to be a community that contributes to improving security. Next, he shared his knowledge of generative AI. The presentation materials are available here (in Japanese). From a security perspective, while we have expectations for its use in detecting suspicious activity in logs, we are also concerned about its use in generating sophisticated phishing emails.

Student Research Award Winning Research Presentation

Finally, on the second day, outstanding students presented their research findings at the Student Research Award Presentation, and the participants voted for the best presentation. I voted for the one I found most compelling in the whole symposium: "Proposal of a KP-less Method for Individual Cyber Exercises Based on Tabletop Role-Playing Games (TRPG)." It was presented by Ms. Erika Fujimoto (Graduate School of Regional Design and Development, University of Nagasaki), who proposed a "KP-less" TRPG-based exercise method for individuals as a cyber exercise scenario. "KP-less" means that no one in the TRPG takes on the role of the Game Master, the organizer. I was drawn to it because of my ongoing interest in information security education as a security officer.
When I was in elementary and junior high school, game books were very popular, so I understood it as an exercise incorporating that method.

Summary

These were some of the talks and presentations at the Cyber Security Symposium Dogo 2023. There were many other useful lectures and presentations. The symposium was a valuable opportunity not only to learn the latest insights on cybersecurity, but also to interact with people interested in the same field. I would like to thank the organizers, sponsors, and attendees.
Introduction

Hello! Thank you for reading! My name is Nakamoto, and I develop the front end of KINTO FACTORY ("FACTORY" in this article), a service that allows you to upgrade your current car. In this article, I would like to introduce a method for detecting errors that occur in clients such as browsers, using AWS CloudWatch RUM.

Getting Started

What prompted this was an inquiry received by our Customer Center (CC): a user tried to order products from the FACTORY website, only to encounter an error where the screen did not transition, which led to an investigation request. I immediately parsed the API logs and checked for errors, but I could not find anything that would lead to one. Next, I checked what kind of device and browser had been used to access the front end. Examining the CloudFront access logs for the user in question, I checked the User-Agent and saw:

Android 10; Chrome/80.0.3987.149

It was a relatively old Android device. With that in mind, while analyzing the source of the page where the problem occurred, a front-end team member advised that replaceAll in JavaScript might be the culprit... That function requires Chrome version 85 or higher... (Since FACTORY recommends using the latest version of each browser, we hadn't tested cases with old versions like this one in QA.) *Other members of the team also told me that you can easily search for functions here to see which browsers and versions support them!

Until now, monitoring in FACTORY detected errors in the BFF layer and notified PagerDuty and Slack, but it could not detect errors on the client side, so this was the first time we noticed such an error through communication from a customer. If we continued as-is, we would never notice client-side errors without customer feedback, so we decided to take countermeasures.

Detection Method

FACTORY's front end had originally been loading client.js from AWS CloudWatch RUM (Real User Monitoring). However, this function was not being used for anything in particular (user journeys and the like are analyzed separately with Google Analytics), so it was a bit of a waste. As I investigated, I learned that RUM allows JavaScript running on a client such as a browser to send events to CloudWatch. Using this mechanism, I decided to build a system that sends and detects custom events whenever some kind of error occurs.

Notification Method

The general flow of notifications is as follows:

1. When an error is detected in the browser, CloudWatch RUM sends a custom event with the error description in the message:

```javascript
window.crm("recordEvent", {
  type: "error_handle_event",
  data: {
    /* Information required for analysis: the contents of the exception/error */
  },
});
```

2. A CloudWatch Alarm detects the above events and sends the error details via SNS when the event occurs.
3. The SNS topic notifies SQS, and a Lambda picks up the message and forwards the error to OpenSearch (this reuses the existing API error detection and notification mechanism).
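As an illustration of step 3, a Lambda handler along these lines could drain the queue and forward each payload. This is a hypothetical sketch, not the actual FACTORY code: ErrorEventForwarder and SearchBackendClient are made-up names, and the real pipeline reuses an existing notification mechanism rather than a client like this.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;

// Hypothetical sketch: drain the SQS queue fed by the SNS topic and forward
// each RUM error event toward the log/search backend.
public class ErrorEventForwarder implements RequestHandler<SQSEvent, Void> {

    private final SearchBackendClient client = new SearchBackendClient();

    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        for (SQSEvent.SQSMessage message : event.getRecords()) {
            // The SQS body wraps the SNS notification, whose "Message" field
            // carries the alarm / RUM event payload.
            client.index("client-errors", message.getBody());
        }
        return null;
    }
}

// Stand-in for whatever client actually ships documents to OpenSearch.
class SearchBackendClient {
    void index(String indexName, String json) {
        System.out.printf("index=%s payload=%s%n", indexName, json);
    }
}
```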
After Implementation

After implementing this mechanism in the production environment and operating it for several months, I can happily say that critical issues, such as the JavaScript error that prompted its introduction, have not occurred. However, I have been able to detect cases where errors occur due to unintended access from search engine crawlers and bots, and I have become aware of accesses I had not paid particular attention to before introducing it, so it has been a reminder of the importance of monitoring and staying vigilant.

Conclusion

To enable the best online purchase experience on websites such as FACTORY, it is very important to prevent as many errors as possible (problems when buying items, viewing pages, and so on). However, there is unfortunately a limit to how much we can guarantee that everything works on all customers' devices and browsers. That is why, when an error does occur, we need to show customers easy-to-understand messages (including what they should do next), and we need a mechanism that lets us, the developers on the operations side, quickly identify the occurrence and details of the problem. I would like to continue using various tools and mechanisms to ensure stable website operation.
Introduction

Hello! I'm Choi from the KTC Global Development Division. I am currently working on KINTO FACTORY, and this year my team members and I investigated the cause of a memory leak in one of its web services, fixed the problems we identified, and resolved it. In this blog post, I will describe our investigation approach, the tools we used, our findings, and the measures we took to address the memory leak.

Background

The KINTO FACTORY site that we develop and operate includes a web service running on AWS ECS. This service uses the Member PF (Platform), our in-house authentication service, and the Payment PF (Platform), our in-house payment service. In January this year, the CPU usage of this web service's ECS tasks rose abnormally, and the service temporarily became inaccessible. At the same time, an incident occurred in which certain screen transitions and operations on the KINTO FACTORY site displayed 404 errors or error dialogs. A similar memory leak had occurred in July of the previous year, and we found that the cause was frequent Full GCs (clearing of the Old generation) and the accompanying rise in CPU usage. When these events occur, restarting the ECS tasks resolves them temporarily, but the root cause of the memory leak needed to be identified and fixed. This article describes our investigation and analysis of the phenomenon and the solution based on it.

Summary of the Investigation and Results

Investigation

First, analyzing the details of this incident, we found that the abnormally high CPU usage of the web service was caused by frequent Full GCs (clearing of the Old generation). Normally, once a Full GC runs, a large amount of memory is freed and another one does not occur for a while. The fact that Full GCs were occurring frequently nonetheless strongly suggested that in-use memory was being consumed excessively, i.e., that there was a memory leak. To verify this hypothesis, we reproduced the memory leak by continuously calling, over long periods, mainly the APIs that had been called most often during the period when the leak occurred, and then analyzed the memory state and heap dumps to find the cause.

The tools we used were:

- JMeter for simulating API traffic
- VisualVM and Grafana for monitoring memory state (local and staging environments)
- OpenSearch for filtering the most frequently called APIs

Since the memory's "Old generation" comes up often in this article, here is a brief explanation: in Java memory management, the heap is divided into the Young and Old generations. Newly created objects are allocated in the Young generation; objects that survive there for a certain time move through the Survivor space into the Old generation. The Old generation holds long-lived objects, and when it fills up, a Full GC occurs. The Survivor space is a part of the Young generation that tracks how long objects have survived.

Findings

A large number of new connection instances were being created on every request to external services, wastefully occupying memory and causing the memory leak.

Investigation Details

1. Identifying the most frequently called APIs

First, to understand which operations were called most and the memory usage, we built an API call summary dashboard in OpenSearch.

2. Calling the identified APIs continuously for 30 minutes in a local environment and analyzing the results

Method: To reproduce the memory leak locally and take a dump for root-cause analysis, we used JMeter to call the APIs continuously for 30 minutes with the following settings.

- JMeter settings: threads: 100; ramp-up period (*): 300 seconds
- Test environment: macOS; Java version: openjdk 17.0.7 2023-04-18 LTS; Java options: -Xms1024m -Xmx3072m

(*) The ramp-up period is the number of seconds within which the configured number of threads should be started and run.

Results and hypothesis: No memory leak occurred. We suspected it did not reproduce because the environment differed from production: the real service runs in Docker, so we decided to put the application in a Docker container and test again.

3. Calling the APIs again in a Docker environment and analyzing the results

Method: To reproduce the memory leak locally, we used JMeter to call the APIs continuously for one hour with the following settings.

- JMeter settings: threads: 100; ramp-up period: 300 seconds
- Test environment: local Docker container (on a Mac); memory limit: 4 GB; CPU limit: 4 cores

Results: Even with the changed environment, no memory leak occurred locally.

Hypotheses:

- The environment still differs from production
- External APIs were not being called
- Memory might accumulate little by little over long periods of API calls
- Objects that are too large might skip the Survivor space and go straight into the Old generation

Since we could not reproduce it locally after all, we decided to test again in a staging environment closer to production.

4. Continuously calling the external-API-related endpoints in the staging environment and analyzing the results

Method: To reproduce the memory leak in the staging environment, we used JMeter to call the APIs continuously with the following settings.

- Target APIs: 7 in total
- Duration: 5 hours
- Users: 2
- Loops: 200 (1,000 was planned, but actual order volume is low, so we changed it to 200)
- Total Factory API calls: 4,000
- External PFs affected: Member PF (1,600 calls), Payment PF (200 calls)

Results: No Full GC occurred, and the memory leak did not reproduce.

Hypothesis: The loop count was too low; memory usage was increasing but had not reached the limit, so no Full GC was triggered. We would increase the number of calls and lower the memory limit to force Full GCs.
5. Lowering the memory limit and calling the APIs for a long time

Method: In the staging environment, we lowered the memory limit and used JMeter to call the Member PF-related APIs continuously for 4 hours.

- Duration: 4 hours
- APIs: the same 7 APIs as before
- Frequency: 12 loops/minute (5 seconds/loop)
- Member PF call frequency: 84 calls/minute
- Member PF calls over 4 hours: 20,164

Dump settings:

```shell
export APPLICATION_JAVA_DUMP_OPTIONS='-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/app/ -XX:OnOutOfMemoryError="stop-java %p;" -XX:OnError="stop-java %p;" -XX:ErrorFile=/var/log/app/hs_err_%p.log -Xlog:gc*=info:file=/var/log/app/gc_%t.log:time,uptime,level,tags:filecount=5,filesize=10m'
```

ECS memory limit settings:

```shell
export APPLICATION_JAVA_TOOL_OPTIONS='-Xms512m -Xmx512m -XX:MaxMetaspaceSize=256m -XX:MetaspaceSize=256m -Xss1024k -XX:MaxDirectMemorySize=32m -XX:-UseCodeCacheFlushing -XX:InitialCodeCacheSize=128m -XX:ReservedCodeCacheSize=128m --illegal-access=deny'
```

Results: We successfully reproduced the memory leak and obtained a heap dump. Opening the dump file in IntelliJ IDEA shows detailed memory information. A close analysis of the dump revealed that a large number of objects were being newly created per request in the external-API-related code, and that some utility classes were not being treated as singletons.

6. Heap dump analysis results

We found that 5,410 HashMap$Node instances had been created inside reactor.netty.http.HttpResources, occupying 352,963,672 bytes (83.09%) of the heap.

Pinpointing where the leak occurs

The leak was occurring in channelPools (a ConcurrentHashMap) inside reactor.netty.resources.PooledConnectionProvider, so we focused on the logic for storing and retrieving entries.

Where poolFactory (InstrumentedPool) is obtained:

- A holder (PoolKey) is created from remote (Supplier<? extends SocketAddress>) and the channelHash obtained from config (HttpClientConfig)
- The holder (PoolKey) is used to look up poolFactory (InstrumentedPool) in channelPools; if an equal key exists, the existing pool is returned, otherwise a new one is created

The cause of the leak is that identical settings were not being judged as the same key:

```java
// reactor.netty.resources.PooledConnectionProvider
public abstract class PooledConnectionProvider<T extends Connection> implements ConnectionProvider {
    ...
    @Override
    public final Mono<? extends Connection> acquire(
            TransportConfig config,
            ConnectionObserver connectionObserver,
            @Nullable Supplier<? extends SocketAddress> remote,
            @Nullable AddressResolverGroup<?> resolverGroup) {
        ...
        return Mono.create(sink -> {
            SocketAddress remoteAddress = Objects.requireNonNull(remote.get(), "Remote Address supplier returned null");
            PoolKey holder = new PoolKey(remoteAddress, config.channelHash());
            PoolFactory<T> poolFactory = poolFactory(remoteAddress);
            InstrumentedPool<T> pool = MapUtils.computeIfAbsent(channelPools, holder, poolKey -> {
                if (log.isDebugEnabled()) {
                    log.debug("Creating a new [{}] client pool [{}] for [{}]", name, poolFactory, remoteAddress);
                }
                InstrumentedPool<T> newPool = createPool(config, poolFactory, remoteAddress, resolverGroup);
                ...
                return newPool;
            });
```

As the name suggests, channelPools is the object that holds channel information so that it can be reused when a similar request arrives. A PoolKey is created from the host name and the hash code of the connection settings, and that hash code is then used for lookups.

Where channelHash comes from: the class hierarchy of reactor.netty.http.client.HttpClientConfig is Object + TransportConfig + ClientTransportConfig + HttpClientConfig.

The lambda passed to PooledConnectionProvider: the lambda defined in com.kinto_jp.factory.common.adapter.HttpSupport below is handed to PooledConnectionProvider as config#doOnChannelInit.

```kotlin
abstract class HttpSupport {
    ...
    private fun httpClient(connTimeout: Int, readTimeout: Int) = HttpClient.create()
        .proxyWithSystemProperties()
        .doOnChannelInit { _, channel, _ ->
            channel.config().connectTimeoutMillis = connTimeout
        }
        .responseTimeout(Duration.ofMillis(readTimeout.toLong()))
    ...
}
```

7. Behavior when looking up channelPools (illustrated)

- Matching key (normal): the key matches information already in channelPools, and the InstrumentedPool is reused.
- Non-matching key (normal): the key does not match anything in channelPools, and a new InstrumentedPool is created.
- Our case (abnormal): the key corresponds to information already in channelPools, but the InstrumentedPool is not reused and a new one is created anyway.
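Why does the key never match? Each call to httpClient(...) creates a fresh lambda object for doOnChannelInit, and two distinct lambda instances are never equal, so a configuration hash derived from them (and hence the PoolKey) differs on every request. The following minimal sketch (my own illustration, not project or Reactor Netty code) demonstrates the underlying Java behavior:

```java
import java.util.function.IntConsumer;

// Minimal sketch of why a per-call lambda breaks pool-key reuse: each factory
// call captures a NEW lambda instance, and distinct lambda objects are never
// equal, so any key built from their hash changes every time.
public class LambdaIdentitySketch {

    static IntConsumer makeInitializer(int connTimeout) {
        // Mimics the doOnChannelInit lambda capturing connTimeout.
        return ignored -> System.out.println("connectTimeoutMillis=" + connTimeout);
    }

    public static void main(String[] args) {
        IntConsumer a = makeInitializer(1000);
        IntConsumer b = makeInitializer(1000);            // same settings...
        System.out.println(a.equals(b));                  // ...but false: distinct objects
        System.out.println(a.hashCode() == b.hashCode()); // almost certainly false too
        // A connection-pool key that incorporates such a hash therefore looks
        // "new" on every request, so a new pool is created instead of reused.
    }
}
```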
## Fixing and Verifying the Problem

### The fix

Rewrite the problematic lambda as a property-based option call.

Before:

```kotlin
abstract class HttpSupport {
    ...
    private fun httpClient(connTimeout: Int, readTimeout: Int) = HttpClient.create()
        .proxyWithSystemProperties()
        .doOnChannelInit { _, channel, _ ->
            channel.config().connectTimeoutMillis = connTimeout
        }
        .responseTimeout(Duration.ofMillis(readTimeout.toLong()))
    ...
}
```

After:

```kotlin
abstract class HttpSupport {
    ...
    private fun httpClient(connTimeout: Int, readTimeout: Int) = HttpClient.create()
        .proxyWithSystemProperties()
        .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, connTimeout)
        .responseTimeout(Duration.ofMillis(readTimeout.toLong()))
    ...
}
```

### Verification

Preconditions:

- Call `MembersHttpSupport#members(memberId: String)` 1,000 times.
- Check how many objects are stored in `PooledConnectionProvider#channelPools`.

**Before the fix:** running in the original state left 1,000 objects stored in `PooledConnectionProvider#channelPools` (the cause of the leak).

**After the fix:** running with the fix applied left exactly 1 object stored in `PooledConnectionProvider#channelPools` (leak resolved).

## Summary

In this investigation we identified the cause of the memory leak in the KINTO FACTORY web service and resolved it with an appropriate fix. Specifically, we found that large numbers of objects were being newly created on every external API call, and replacing the lambda with a property-based option call eliminated the leak.

This project taught us several important lessons:

- **Continuous monitoring.** The abnormal ECS CPU usage and the frequent Full GCs underscored the importance of ongoing monitoring. Watching system performance at all times lets you catch the early signs of a problem and respond quickly.
- **Early identification and countermeasures.** By suspecting a leak, driving the APIs over long periods, and reproducing the memory behavior, we pinned the problem down to mass object creation on external service requests. This let us identify the cause quickly and apply the right fix.
- **Teamwork.** Tackling a complex problem succeeds when the whole team pulls together. This fix and its verification were achieved through everyone's cooperation and effort, particularly across the investigation, analysis, correction, and verification steps.

The investigation phase had its share of pain. The leak was hard to reproduce locally, forcing us to re-verify in a staging environment close to production, and reproducing it by calling external APIs for hours on end took considerable time and effort. Overcoming those hurdles to finally resolve the problem made the sense of accomplishment all the greater.

Through this article I've shared a practical approach and lessons for improving system performance and maintaining stability. I hope it helps other developers facing similar problems. That's all!
## To Be Event Staff at try! Swift Tokyo 2024

With my childcare duties now more manageable, I decided to get more involved in outside activities, and when I noticed that try! Swift Tokyo 2024 was looking for staff, I took the leap and submitted my application. To tell the truth, I had never been to try! Swift Tokyo, even as a participant, so I applied without really knowing what the atmosphere of the venue would be like 😅 In this article, I'll share my experience as a staff member at the event.

## What is try! Swift Tokyo 2024?

try! Swift Tokyo 2024, held in March 2024, is a conference for iOS developers in Japan. Since its inception in 2016, it has consistently served as the largest gathering for professionals in the iOS development community. After a long pause due to COVID-19, this year marked its return for the first time in five years. Please visit the official website for more information.

In my experience, iOSDC, another conference famous for its scale in the iOS community, builds its schedule largely from open calls for speakers within Japan. try! Swift Tokyo, on the other hand, sources proposals internationally and invites renowned engineers from abroad, so there were many situations where we needed to communicate in English.

## Staff Activities

On the day, I worked as a staff member on the organizing side. It was my first time working behind the scenes, and it was a very exciting and enjoyable experience. One week before the event, all the staff gathered for a meeting where responsibilities were assigned. I was put in charge of the venue and asked to handle the following:

- Setting up the venue
- Guiding the participants
- Venue guidance
- Handing out lunch boxes
- Collecting garbage
- Venue teardown
- Other tasks within the venue

![](/assets/blog/authors/HiroyaHinomori/IMG_2773.jpg =400x)

I usually spend most of my time writing code, so I was worried whether my body could handle three days of physical work. However, I found it surprisingly refreshing to be active and to interact with people. In particular, I enjoyed talking to the attendees during reception and venue guidance. With many speakers and participants from abroad, try! Swift Tokyo required English communication, which made me very aware of my limited language skills.

Given that it was the first edition in five years, there were many newcomers, myself included. Despite occasional uncertainty about how to do things, everyone worked together and enjoyed the activities, and our first day ended successfully.

![](/assets/blog/authors/HiroyaHinomori/IMG_2784.jpg =400x)

During the venue teardown on the second day, it was nice to see that some people had left their signatures on the sponsor boards that remained 👍 At the after party that followed the teardown, participants and staff were able to have fun together, and it was very nice to meet new people there.

On the third day, a workshop was held for participants, and witnessing their enthusiasm was truly inspiring, leaving me feeling uplifted 💪 I had some free time too, so I took the opportunity to exchange information with other staff members. The churrasco I ate at the post-teardown celebration was also delicious 😋

## Conclusion

![](/assets/blog/authors/HiroyaHinomori/IMG_2804.jpg =400x)

I wanted to take more pictures, but I regret that I couldn't because I was so focused on the work...
By joining as a staff member, I encountered people and experiences I never would have come across as a regular participant, which gave me a real sense of fulfillment. It was a great experience, and I'd love to join as staff again if I get the chance next time! If you are reading this article, I encourage you to challenge yourself and consider being a conference staff member as well! Finally, I'd like to say THANK YOU to all the organizers, speakers, and other participants!!! See you again 👍
## Introduction

Hello. I'm Rasel from the Mobile Application Development Group at KINTO Technologies. I currently work on the development of the my route Android app. my route is a multimodal app for getting around: you can gather information about your destination, explore places on a map, purchase digital tickets, make reservations, and pay transit fares.

Mobile apps are now indispensable to our daily lives. Engineers like us build Android and iOS apps separately, so supporting both platforms incurs double the development cost. Cross-platform frameworks such as React Native and Flutter emerged to reduce these costs. However, the performance of cross-platform apps has always been a challenge: it never quite matches a native app. In addition, when Android or iOS releases a new platform-specific feature, you may have to wait for support from the framework developers, which takes even more time.

This is where Kotlin Multiplatform (KMP) helps. It offers near-native performance while letting you freely choose which code to share between platforms. With KMP, the Android app is developed in Kotlin, Android's first-class native language, and is fully native, so there are almost no performance concerns. The iOS side uses Kotlin/Native and performs closer to a natively developed app than other frameworks do.

In this article, I'll show how to integrate SwiftUI code with Compose Multiplatform. With KMP (also known as KMM on mobile platforms), you can freely choose how much code to share across platforms and how much to implement natively, and it integrates seamlessly with platform code. Previously, only business logic could be shared between platforms, but now UI code can be shared as well, thanks to Compose Multiplatform. Reading our earlier articles below will give you a better grounding in mobile app development with Kotlin Multiplatform and Compose Multiplatform:

- Mobile App Development with Kotlin Multiplatform Mobile (KMM)
- Developing Mobile Applications with Kotlin Multiplatform Mobile (KMM) and Compose Multiplatform

Let's get started!

## Overview

We build apps with KMP, using Compose Multiplatform for UI development. To demonstrate how to integrate SwiftUI into Compose Multiplatform, we'll use a very simple Gemini chat app. Replies to the user's chat queries come from Google's Gemini Pro API. Since the goal is a demo, we keep things simple and use the free tier of the API, which allows text messages only.

## How Compose and SwiftUI Work Together

First things first: create a KMP project with JetBrains' Kotlin Multiplatform Wizard. The wizard comes with the basic KMP setup you'll need, Compose Multiplatform, and some initial SwiftUI code.

![Kotlin Multiplatform Wizard](/assets/blog/authors/ahsan_rasel/kmp_wizard.png =450x)

You can also create the project from the Android Studio IDE with the Kotlin Multiplatform Mobile plugin installed.

Let's demonstrate how Compose and SwiftUI work together. To embed composable code in iOS, you wrap it in a `ComposeUIViewController`. `ComposeUIViewController` returns a `UIViewController` value from UIKit, and your Compose code goes inside it as the content parameter. For example:

```kotlin
// MainViewController.kt
fun ComposeEntryPoint(): UIViewController {
    return ComposeUIViewController {
        Column(
            modifier = Modifier.fillMaxSize(),
            horizontalAlignment = Alignment.CenterHorizontally,
            verticalArrangement = Arrangement.Center
        ) {
            Text(text = "Hello from Compose")
        }
    }
}
```

Next, call this function from the iOS side. For that we need a struct that represents the Compose code as a SwiftUI view. The following code converts the `UIViewController` from the shared module into a SwiftUI view:

```swift
// ComposeViewControllerRepresentable.swift
struct ComposeViewControllerRepresentable: UIViewControllerRepresentable {
    func updateUIViewController(_ uiViewController: UIViewControllerType, context: Context) {}

    func makeUIViewController(context: Context) -> some UIViewController {
        return MainViewControllerKt.ComposeEntryPoint()
    }
}
```

Take a closer look at the name `MainViewControllerKt.ComposeEntryPoint()`. This is code generated from Kotlin, so it varies with the file name and code in your shared module. If the file in the shared module were named `Main.ios.kt` and the function returning the `UIViewController` were `ComposeEntryPoint()`, you would call it as `Main_iosKt.ComposeEntryPoint()`. It depends on your code.

Next, instantiate this `ComposeViewControllerRepresentable` inside `ContentView()`, and we're ready to go:

```swift
// ContentView.swift
struct ContentView: View {
    var body: some View {
        ComposeViewControllerRepresentable()
            .ignoresSafeArea(.all)
    }
}
```

As the code shows, this Compose code can be used anywhere inside SwiftUI, and you can control its size from SwiftUI however you like. The UI looks like this:

![Hello from Swift](/assets/blog/authors/ahsan_rasel/swiftui_compose_1.png =250x)
To integrate SwiftUI code inside Compose, you need to wrap it in a `UIView`. SwiftUI code cannot be written directly in Kotlin, so you write it in Swift and pass it to a Kotlin function. To implement this, let's add an argument of `UIView` factory type to the `ComposeEntryPoint()` function:

```kotlin
// MainViewController.kt
fun ComposeEntryPoint(createUIView: () -> UIView): UIViewController {
    return ComposeUIViewController {
        Column(
            modifier = Modifier.fillMaxSize(),
            horizontalAlignment = Alignment.CenterHorizontally,
            verticalArrangement = Arrangement.Center
        ) {
            UIKitView(
                factory = createUIView,
                modifier = Modifier.fillMaxWidth().height(500.dp),
            )
        }
    }
}
```

Then pass `createUIView` from the Swift code like this:

```swift
// ComposeViewControllerRepresentable.swift
struct ComposeViewControllerRepresentable: UIViewControllerRepresentable {
    func updateUIViewController(_ uiViewController: UIViewControllerType, context: Context) {}

    func makeUIViewController(context: Context) -> some UIViewController {
        return MainViewControllerKt.ComposeEntryPoint(createUIView: { () -> UIView in
            UIView()
        })
    }
}
```

Now, if you want to add other views, create a parent wrapper `UIView` as follows:

```swift
// ComposeViewControllerRepresentable.swift
private class SwiftUIInUIView<Content: View>: UIView {
    init(content: Content) {
        super.init(frame: CGRect())
        let hostingController = UIHostingController(rootView: content)
        hostingController.view.translatesAutoresizingMaskIntoConstraints = false
        addSubview(hostingController.view)
        NSLayoutConstraint.activate([
            hostingController.view.topAnchor.constraint(equalTo: topAnchor),
            hostingController.view.leadingAnchor.constraint(equalTo: leadingAnchor),
            hostingController.view.trailingAnchor.constraint(equalTo: trailingAnchor),
            hostingController.view.bottomAnchor.constraint(equalTo: bottomAnchor)
        ])
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```

Then plug it into `ComposeViewControllerRepresentable` and add whatever views you need:

```swift
// ComposeViewControllerRepresentable.swift
func makeUIViewController(context: Context) -> some UIViewController {
    return MainViewControllerKt.ComposeEntryPoint(createUIView: { () -> UIView in
        SwiftUIInUIView(content: VStack {
            Text("Hello from SwiftUI")
            Image(systemName: "moon.stars")
                .resizable()
                .frame(width: 200, height: 200)
        })
    })
}
```

The output looks like this:

![Hello from Swift with Image](/assets/blog/authors/ahsan_rasel/swiftui_compose_2.png =250x)

With this approach, you can add as much SwiftUI code as you like to the shared composable code.

If you want to integrate UIKit code inside Compose, there is no need to write the intermediate code yourself. You can add UIKit code directly inside `UIKitView()`, a composable function provided by Compose Multiplatform:

```kotlin
// MainViewController.kt
UIKitView(
    modifier = Modifier.fillMaxWidth().height(350.dp),
    factory = { MKMapView() }
)
```

This code embeds the iOS-native map view inside Compose.

## Implementing the Gemini Chat App

Now let's proceed with integrating Compose code into SwiftUI and implementing the Gemini chat app. We implement a basic chat UI using Jetpack Compose's `LazyColumn`. Since the main goal here is integrating SwiftUI into Compose Multiplatform, I'll skip the implementation of the other parts: Compose details, data, logic, and so on. We used the Ktor networking library to implement the Gemini Pro API; see the Creating a cross-platform mobile application page for details on implementing Ktor.

In this project, the entire UI is implemented with Compose Multiplatform. Because Compose Multiplatform's TextField has performance problems on the iOS side, we use SwiftUI only for the input field of the iOS app. Let's put the Compose code inside the `ComposeEntryPoint()` function. It contains the chat UI including a TopAppBar and the list of messages, plus a conditional implementation of the input field used by the Android app:

```kotlin
// MainViewController.kt
fun ComposeEntryPoint(): UIViewController = ComposeUIViewController {
    Column(
        Modifier
            .fillMaxSize()
            .windowInsetsPadding(WindowInsets.systemBars),
        horizontalAlignment = Alignment.CenterHorizontally
    ) {
        ChatApp(displayTextField = false)
    }
}
```

Because we pass `false` for `displayTextField`, the Compose input field is not activated in the iOS version of the app.
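The post doesn't show `ChatApp()` itself, so here is a rough Kotlin sketch of what a composable with such a `displayTextField` switch could look like. Everything in it (names, layout, message handling) is an assumption for illustration, not the actual demo code:

```kotlin
// Hypothetical sketch of the shared ChatApp composable; the real demo's
// implementation is not shown in this post.
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material3.Text
import androidx.compose.material3.TextField
import androidx.compose.runtime.*
import androidx.compose.ui.Modifier

@Composable
fun ChatApp(displayTextField: Boolean) {
    var input by remember { mutableStateOf("") }
    val messages = remember { mutableStateListOf("Hello! Ask me anything.") }

    Column(Modifier.fillMaxSize()) {
        // The message list is identical on both platforms.
        LazyColumn(Modifier.weight(1f)) {
            items(messages) { message -> Text(message) }
        }
        // Only Android passes true and renders the Compose input field;
        // iOS passes false and overlays a SwiftUI input view instead.
        if (displayTextField) {
            TextField(
                value = input,
                onValueChange = { input = it },
                modifier = Modifier.fillMaxWidth()
            )
        }
    }
}
```

The point of the flag is that the shared UI stays identical on both platforms, while ownership of text input is decided per platform.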
On Android, where the Compose TextField has no performance problems, the Android entry point calls this same `ChatApp()` composable with `displayTextField = true`, so the input field there is a native Android UI component.

Now let's return to the Swift code and implement the input field in SwiftUI:

```swift
// TextInputView.swift
struct TextInputView: View {
    @Binding var inputText: String
    @FocusState private var isFocused: Bool

    var body: some View {
        VStack {
            Spacer()
            HStack {
                TextField("メッセージを入力する...", text: $inputText, axis: .vertical)
                    .focused($isFocused)
                    .lineLimit(3)
                if (!inputText.isEmpty) {
                    Button {
                        // sendMessage(_:) forwards the text to the shared
                        // chat logic (defined elsewhere in the project).
                        sendMessage(inputText)
                        isFocused = false
                        inputText = ""
                    } label: {
                        Image(systemName: "arrow.up.circle.fill")
                            .tint(Color(red: 0.671, green: 0.365, blue: 0.792))
                    }
                }
            }
            .padding(15)
            .background(RoundedRectangle(cornerRadius: 200).fill(.white).opacity(0.95))
            .padding(15)
        }
    }
}
```

Then go back to the `ContentView` struct and modify it as follows:

```swift
// ContentView.swift
struct ContentView: View {
    @State private var inputText = ""

    var body: some View {
        ZStack {
            Color("TopGradient")
                .ignoresSafeArea()
            ComposeViewControllerRepresentable()
            TextInputView(inputText: $inputText)
        }
        .onTapGesture {
            // Hide keyboard on tap outside of TextField
            UIApplication.shared.sendAction(#selector(UIResponder.resignFirstResponder), to: nil, from: nil, for: nil)
        }
    }
}
```

Here we added a `ZStack` containing the `TopGradient` color with the `ignoresSafeArea()` modifier so that the status bar color matches the rest of the UI. Next we added `ComposeViewControllerRepresentable`, the wrapper around the shared Compose code that implements the main chat UI. Finally we added the SwiftUI view `TextInputView()`, which gives iOS users smooth input performance through iOS-native code. The final UI looks like this:

![Gemini Chat iOS](/assets/blog/authors/ahsan_rasel/swiftui_compose_ios.png =300x)
![Gemini Chat Android](/assets/blog/authors/ahsan_rasel/swiftui_compose_android.png =300x)

Here, the entire chat UI code is shared between Android and iOS through KMP's Compose Multiplatform, and only the iOS input field is integrated natively in SwiftUI.

The complete source code for this project is available as a public repository on GitHub.

GitHub repository: SwiftUI in Compose Multiplatform

## Closing

As shown, Kotlin Multiplatform and Compose Multiplatform let you solve the performance problems of cross-platform apps while giving users a native-like look and feel. You can also share as much code as you like between platforms, which reduces development costs. Compose Multiplatform can even share code with desktop apps, so a single codebase can serve desktop as well as mobile, and web support is in progress to further promote codebase sharing across platforms. Another big advantage of KMP is that you can switch to native development at any time without wasting code: the existing KMP code is native Android, so the Android app can keep using it as-is while the iOS app is split off and developed separately, and you can even reuse the same SwiftUI code you already implemented in the KMP app. This framework not only delivers high-performance applications but also leaves you free to change the proportion of shared code or move to native development whenever you choose.

That's it for this article, but the KINTO Technologies tech blog will keep publishing interesting articles. Happy Coding!
## Introduction

Hello! My name is Morimoto, and I am a backend engineer at KINTO Technologies. I am part of the KINTO ONE Development Group, where I primarily use Java on KINTO ONE. This time, though, I would like to introduce a GraphQL study group that we're running separately from our regular work.

## What is GraphQL?

GraphQL is a query language. Unlike a language such as SQL, GraphQL can interact with multiple data sources, not just one specific one. Once the schema is defined on the backend, the frontend can freely retrieve any items in an object according to that definition. Unlike a REST API, GraphQL gives the frontend the flexibility to specify exactly what information the backend should return: there is no need to receive unneeded fields, and no need to call the API multiple times to assemble nested objects (see the small client sketch near the end of this article).

## Purpose of the Study Group

There were two main purposes:

- To improve our technical skills
- To interact beyond our respective teams

### To improve our technical skills

We wanted to catch up on new technology beyond what we use in our daily work, but each member felt the hurdle was too high to clear alone. One barrier, for example, was language knowledge: the GraphQL tutorial we decided to follow uses TypeScript, so we had to learn TypeScript before we could learn GraphQL. The idea was that by complementing each other with our different knowledge and experience, we could overcome such challenges and make the learning curve less steep.

### To interact beyond our respective teams

We also wanted the sessions to be an opportunity for members to interact with each other regardless of group, team, project, or age. Many of us were already good friends, but we were determined to get along even better by meeting regularly. I also thought the study sessions would be a chance to discover new sides of one another.

## Content Details

### Why GraphQL?

Those of us who usually implement backend APIs had been struggling with the need to create a new API every time a requirement came in. Of course, sometimes it is faster to process data on the server side, but multiplying APIs that merely return data as-is is a burden. Ever since I heard that GraphQL was a good solution to this, I had wanted to try it out. Some members already had some experience using GraphQL but wanted to understand the overall flow, so we decided to study it properly together.

### Tutorials used for the study sessions

We chose Apollo GraphQL as our GraphQL library and used the tutorial linked below.

GraphQL Tutorials

We picked it for the volume of tutorials available and because we felt it was a good introduction. In addition, one of the study group members had used Apollo GraphQL in their work, so we knew it had a track record inside the company.

## Summary of the Study Group

### Date and time

Once a week, after 6 pm, whenever members had time.

### Members

The group consists of eight young members, aged 25 to 28. Our backgrounds and expertise were diverse, spanning domains such as web application frontend and backend as well as mobile application frontend and backend.

### What we did

We completed all five chapters of the tutorial, from Lift-off I to V, which cover the basics of implementing GraphQL.
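As a concrete taste of what those chapters cover, here is the kind of request a GraphQL client ultimately makes: a single POST whose body names exactly the fields the client wants, nested objects included. This sketch is written in Kotlin rather than the tutorial's TypeScript, purely for consistency with the other code on this blog; the endpoint URL is a placeholder and the field names only loosely follow the Lift-off tutorial's schema, so treat both as assumptions.

```kotlin
// Minimal GraphQL client sketch using only the JDK's HTTP client.
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // The client names exactly the fields it wants; the nested author
    // object arrives in the same single response, no second request needed.
    val payload = """{"query": "{ tracksForHome { title author { name } } }"}"""

    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://example.com/graphql")) // placeholder endpoint
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(payload))
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body())
}
```

Swapping fields in or out of that query string changes what the server returns, with no new endpoint needed on the backend, which is precisely the pain point that drew us to GraphQL.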
## How It Was Conducted and What We Arranged

We ran the study group as follows:

1. We worked through the tutorials in mokumoku-kai style.
2. We reinforced our learning by presenting the tutorial content to each other.

We started by holding a series of mokumoku-kai. A mokumoku-kai is a study format where everyone gathers in one place and mainly works on their own, sharing questions and ideas as needed. As mentioned, some of us had used Apollo GraphQL before, but none of us had the complete picture of the process. So we first worked through the same tutorials and then discussed and resolved whatever questions came up.

However, some members said they doubted whether they had really understood the material and suggested consolidating the concepts before moving on. So we decided to present each section to one another on a rotating basis; the presentation format requires the presenter to understand the tutorial thoroughly. While reviewing the material for our presentations, we found answers to questions in parts we had been working through somewhat aimlessly. During the presentations themselves, we could ask questions, change the source code, and try things out, and there were new discoveries that none of us would have made alone.

A glimpse of one of our sessions. In the foreground are boxes of sandwiches prepared for the study group.

## Conclusion

First and foremost, these sessions gave us a deep understanding of GraphQL. By using our knowledge and experience to complement each other, we were able to proceed faster and more reliably than any of us could alone. Having study partners also helped us persevere through the moments when we felt like giving up.

We aim to continue with the remaining chapters of the Apollo GraphQL tutorials and learn about other technical topics. We even discussed how we would love to build some kind of application in the process. By exploring the languages, frameworks, and architectures each of us is interested in, I hope to keep improving my technical capabilities.