{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":208145128,"defaultBranch":"main","name":"iree","ownerLogin":"iree-org","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2019-09-12T20:57:39.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/107954215?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1718235191.0","currentOid":""},"activityList":{"items":[{"before":"ca999ca532659d6269007a917dcc21e2b6351975","after":"f667e8c1b70dbe8305b35102be950fa0cb69af08","ref":"refs/heads/shared/sdxl_sprint_2","pushedAt":"2024-06-13T00:06:51.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"antiagainst","name":"Lei Zhang","path":"/antiagainst","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/487928?s=80&v=4"},"commit":{"message":"remove trailing space\n\nSigned-off-by: Bangtian Liu ","shortMessageHtmlLink":"remove trailing space"}},{"before":"f6c93272a8178e1eee80dae4c6f698a32d1d3636","after":"92e5e6e1823512d33469c49342caea6b34adc045","ref":"refs/heads/gh-pages","pushedAt":"2024-06-12T23:53:53.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"iree-github-actions-bot","name":null,"path":"/iree-github-actions-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/62120345?s=80&v=4"},"commit":{"message":"Deployed 2ff4102 with MkDocs version: 1.6.0","shortMessageHtmlLink":"Deployed 2ff4102 with MkDocs version: 1.6.0"}},{"before":"1513871d74e16b8734d01509314bb11aa1378ce4","after":null,"ref":"refs/heads/revert-17536-new-decomposition-attention","pushedAt":"2024-06-12T23:49:57.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"ScottTodd","name":"Scott Todd","path":"/ScottTodd","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/4010439?s=80&v=4"}},{"before":"71c07faba52f527b08abc73c652ba140f1d8aa54","after":"2ff4102aba9e878f729840da66a44fe4bd3c8790","ref":"refs/heads/main","pushedAt":"2024-06-12T23:49:57.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"ScottTodd","name":"Scott Todd","path":"/ScottTodd","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/4010439?s=80&v=4"},"commit":{"message":"Revert \"[LinalgExt] Add online_attention op\" (#17658)\n\nReverts iree-org/iree#17536\r\n\r\nThis caused `sdxl-scheduled-unet-3-tank` to hit timeouts when compiling\r\nfor cpu:\r\nhttps://github.com/iree-org/iree/actions/runs/9484305572/job/26134004282","shortMessageHtmlLink":"Revert \"[LinalgExt] Add online_attention op\" (#17658)"}},{"before":null,"after":"1513871d74e16b8734d01509314bb11aa1378ce4","ref":"refs/heads/revert-17536-new-decomposition-attention","pushedAt":"2024-06-12T23:33:11.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"ScottTodd","name":"Scott Todd","path":"/ScottTodd","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/4010439?s=80&v=4"},"commit":{"message":"Revert \"[LinalgExt] Add online_attention op (#17536)\"\n\nThis reverts commit abf008703c9c57d755bbea1197d9d8f062b0392e.","shortMessageHtmlLink":"Revert \"[LinalgExt] Add online_attention op (#17536)\""}},{"before":"8e0dcd94a014cb70b43f0ace79daff4163b5eaf4","after":"f6c93272a8178e1eee80dae4c6f698a32d1d3636","ref":"refs/heads/gh-pages","pushedAt":"2024-06-12T22:18:46.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"iree-github-actions-bot","name":null,"path":"/iree-github-actions-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/62120345?s=80&v=4"},"commit":{"message":"Deployed 71c07fa with MkDocs version: 1.6.0","shortMessageHtmlLink":"Deployed 71c07fa with MkDocs version: 1.6.0"}},{"before":"0a561c47c3dc77ce038aef0ca9bf8d8f02ff2f2a","after":"71c07faba52f527b08abc73c652ba140f1d8aa54","ref":"refs/heads/main","pushedAt":"2024-06-12T22:14:42.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"hanhanW","name":"Han-Chung Wang","path":"/hanhanW","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6949191?s=80&v=4"},"commit":{"message":"[CPU] Signal errors if there are large vectors. (#17620)\n\nThe default ratio is 512 because the codegen could generate ops with\r\n`vector<16x16x16xf32>` types in some tests. It should be revisited after\r\nwe move codegen to a better state.\r\n\r\nAn additional `iree-llvmcpu-fail-on-large-vector` flag is added. It is\r\non by default; it is a way to bypass the check for developers.\r\n\r\nAdditional changes:\r\n\r\n- Update the preset tile sizes in `lowering_config.mlir`. It does not\r\nmatter because it is just testing the e2e flow for preset compilation.\r\n- Disable the check for `sdxl-scheduled-unet-3-tank` model. It should be\r\nremoved after we fix the attention codegen issue.\r\n- Disable the `sdxl-vae-decode-tank` model because of bad attention\r\ncodegen. Different from the other one, it is generating ops with\r\nvector<512x32xf16> types, which should really just fail.\r\n\r\nFixes https://github.com/iree-org/iree/issues/17486\r\n\r\n---------\r\n\r\nSigned-off-by: hanhanW ","shortMessageHtmlLink":"[CPU] Signal errors if there are large vectors. (#17620)"}},{"before":"c2f5b0c792710f88b7677db3649026f978ef8428","after":"fbe523895fe192c73bf1f3e86424885ad593c5d5","ref":"refs/heads/shared/transpose-fused-attention","pushedAt":"2024-06-12T21:12:25.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Groverkss","name":"Kunwar Grover","path":"/Groverkss","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/51270680?s=80&v=4"},"commit":{"message":"add llvm-project changes","shortMessageHtmlLink":"add llvm-project changes"}},{"before":"429465963892dbf16965d2345fda560efc5b9265","after":"c2f5b0c792710f88b7677db3649026f978ef8428","ref":"refs/heads/shared/transpose-fused-attention","pushedAt":"2024-06-12T21:09:07.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"Groverkss","name":"Kunwar Grover","path":"/Groverkss","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/51270680?s=80&v=4"},"commit":{"message":"add llvm-project changes","shortMessageHtmlLink":"add llvm-project changes"}},{"before":null,"after":"ca999ca532659d6269007a917dcc21e2b6351975","ref":"refs/heads/shared/sdxl_sprint_2","pushedAt":"2024-06-12T19:46:49.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"MaheshRavishankar","name":null,"path":"/MaheshRavishankar","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1663364?s=80&v=4"},"commit":{"message":"continue to address comments\n\nSigned-off-by: Bangtian Liu ","shortMessageHtmlLink":"continue to address comments"}},{"before":"abf008703c9c57d755bbea1197d9d8f062b0392e","after":"0a561c47c3dc77ce038aef0ca9bf8d8f02ff2f2a","ref":"refs/heads/main","pushedAt":"2024-06-12T17:05:42.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"qedawkins","name":"Quinn Dawkins","path":"/qedawkins","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22101546?s=80&v=4"},"commit":{"message":"[Codegen][GPU] Make operand promotion pattern work with generics (#17650)\n\nThe pattern was previously using the `isMatmulOrBatchMatmul` helper that\r\nonly looked for named ops. Change the logic to use inferred contraction\r\ndims and look at the static bounds of the op to filter out matvec cases.","shortMessageHtmlLink":"[Codegen][GPU] Make operand promotion pattern work with generics (#17650"}},{"before":"b21141549dd91bf3a7aba337eb03a7d06019c5c2","after":"8e0dcd94a014cb70b43f0ace79daff4163b5eaf4","ref":"refs/heads/gh-pages","pushedAt":"2024-06-12T14:58:22.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"iree-github-actions-bot","name":null,"path":"/iree-github-actions-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/62120345?s=80&v=4"},"commit":{"message":"Deployed abf0087 with MkDocs version: 1.6.0","shortMessageHtmlLink":"Deployed abf0087 with MkDocs version: 1.6.0"}},{"before":"52b21f8274f7e62d7c44e4c8b7b2147a00016bc0","after":"abf008703c9c57d755bbea1197d9d8f062b0392e","ref":"refs/heads/main","pushedAt":"2024-06-12T14:54:06.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"Groverkss","name":"Kunwar Grover","path":"/Groverkss","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/51270680?s=80&v=4"},"commit":{"message":"[LinalgExt] Add online_attention op (#17536)\n\nThis patch adds a new online_attention op. This op represents a\r\npartially reduced attention op which can be tiled along it's k2\r\nreduction dimension. This op also has indexing maps, supports tiling on\r\nall dimensions other than k1 dimension, and can decompose based on any\r\ngiven indexing maps.\r\n\r\nThis patch also makes the CPU backend use online attention to decompose\r\nand tile reduction dimension, allowing it to be tiled along N and batch\r\ndimensions, and tiling using LLVMCPUTile.","shortMessageHtmlLink":"[LinalgExt] Add online_attention op (#17536)"}},{"before":"fea407088086667282bb22fbdcbe814343434051","after":"b21141549dd91bf3a7aba337eb03a7d06019c5c2","ref":"refs/heads/gh-pages","pushedAt":"2024-06-12T13:32:24.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"iree-github-actions-bot","name":null,"path":"/iree-github-actions-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/62120345?s=80&v=4"},"commit":{"message":"Deployed 52b21f8 with MkDocs version: 1.6.0","shortMessageHtmlLink":"Deployed 52b21f8 with MkDocs version: 1.6.0"}},{"before":"cda3ccb052be5aa81e3a3e7f5683b9ba39a8e55b","after":"52b21f8274f7e62d7c44e4c8b7b2147a00016bc0","ref":"refs/heads/latest-snapshot","pushedAt":"2024-06-12T13:28:12.000Z","pushType":"push","commitsCount":4,"pusher":{"login":"iree-github-actions-bot","name":null,"path":"/iree-github-actions-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/62120345?s=80&v=4"},"commit":{"message":"[GPUHeuristic] Modify schedule generator to consider distribution of tranfer_read layout anchor (#17636)\n\nModify heuristic to take into account layout of transfer reads, S.T we will not generate invalid schedules who's transfer read cannot be distributed because the sizes do not match up.\r\n\r\nFor example in one matmul with N-dim with these sizes\r\n[wgTileSize, elemPerThread, threadSize] = [192, 8, 128]. \r\nThere is no good layout for this because, the numbers of threads\r\nneeded would be 192/8 == 24, and Since the threadSize pre-determined by \r\nschedule is 128, we will have 128 % 24 != 0. Hence we cannot distribute it.\r\n\r\nThis patch introduce constraints in our heuristic to solve these cases.\r\n\r\n---------\r\n\r\nSigned-off-by: stanley-nod ","shortMessageHtmlLink":"[GPUHeuristic] Modify schedule generator to consider distribution of …"}},{"before":"c1e542d6370473244a8fa9178615cb8a6041b489","after":"52b21f8274f7e62d7c44e4c8b7b2147a00016bc0","ref":"refs/heads/main","pushedAt":"2024-06-12T03:19:39.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"raikonenfnu","name":"Stanley Winata","path":"/raikonenfnu","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/68087699?s=80&v=4"},"commit":{"message":"[GPUHeuristic] Modify schedule generator to consider distribution of tranfer_read layout anchor (#17636)\n\nModify heuristic to take into account layout of transfer reads, S.T we will not generate invalid schedules who's transfer read cannot be distributed because the sizes do not match up.\r\n\r\nFor example in one matmul with N-dim with these sizes\r\n[wgTileSize, elemPerThread, threadSize] = [192, 8, 128]. \r\nThere is no good layout for this because, the numbers of threads\r\nneeded would be 192/8 == 24, and Since the threadSize pre-determined by \r\nschedule is 128, we will have 128 % 24 != 0. Hence we cannot distribute it.\r\n\r\nThis patch introduce constraints in our heuristic to solve these cases.\r\n\r\n---------\r\n\r\nSigned-off-by: stanley-nod ","shortMessageHtmlLink":"[GPUHeuristic] Modify schedule generator to consider distribution of …"}},{"before":"47d04ca8fcfbb1224a8e105a948e7ed685e6f3ff","after":"220fa82152bb0295655051ebb3d002df7e7494a9","ref":"refs/heads/mahesh/tiling_interface","pushedAt":"2024-06-11T23:00:04.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"MaheshRavishankar","name":null,"path":"/MaheshRavishankar","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1663364?s=80&v=4"},"commit":{"message":"Fixes for https://github.com/llvm/llvm-project/pull/91878\n\nSigned-off-by: MaheshRavishankar ","shortMessageHtmlLink":"Fixes for llvm/llvm-project#91878"}},{"before":null,"after":"47d04ca8fcfbb1224a8e105a948e7ed685e6f3ff","ref":"refs/heads/mahesh/tiling_interface","pushedAt":"2024-06-11T22:47:20.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"MaheshRavishankar","name":null,"path":"/MaheshRavishankar","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1663364?s=80&v=4"},"commit":{"message":"[Do Not submit] Enable CI on branch.\n\nSigned-off-by: MaheshRavishankar ","shortMessageHtmlLink":"[Do Not submit] Enable CI on branch."}},{"before":"668352671a5c47017e0fec942208f27d826d5f84","after":null,"ref":"refs/heads/mac-runner","pushedAt":"2024-06-11T19:15:45.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"jpienaar","name":"Jacques Pienaar","path":"/jpienaar","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/706766?s=80&v=4"}},{"before":"6e1d80a96cb9f93ad90978685b032d5ebfa2cbe0","after":"c1e542d6370473244a8fa9178615cb8a6041b489","ref":"refs/heads/main","pushedAt":"2024-06-11T19:15:44.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jpienaar","name":"Jacques Pienaar","path":"/jpienaar","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/706766?s=80&v=4"},"commit":{"message":"Change macos runner to regular (#17634)\n\nNot sure if we should perhaps delete this one completely for now.\r\n\r\nci-exactly: build_test_all_macos_x86_64\r\n\r\n---------\r\n\r\nCo-authored-by: Scott Todd ","shortMessageHtmlLink":"Change macos runner to regular (#17634)"}},{"before":"db7974c26549e3700923244ac1847b032013f898","after":"6e1d80a96cb9f93ad90978685b032d5ebfa2cbe0","ref":"refs/heads/main","pushedAt":"2024-06-11T17:54:45.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"IanWood1","name":"Ian Wood","path":"/IanWood1","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/75152913?s=80&v=4"},"commit":{"message":"[Flow] Make the output indexing_map of elementwise ops identity. (#17583)\n\nContinuing https://github.com/iree-org/iree/pull/17262. Just moved logic\r\ninto fusion preprocessing\r\n\r\n---------\r\n\r\nSigned-off-by: Ian Wood \r\nCo-authored-by: hanhanW ","shortMessageHtmlLink":"[Flow] Make the output indexing_map of elementwise ops identity. (#17583"}},{"before":"0539ca4f294b1aabcfe75358c78b590eac34bc1f","after":"89d45974e3e4f24e4a08e2b7a84b886e087cd2b3","ref":"refs/heads/users/benvanik/device-attrs","pushedAt":"2024-06-11T16:16:15.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"benvanik","name":"Ben Vanik","path":"/benvanik","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/75337?s=80&v=4"},"commit":{"message":"[WIP] Changing stream conversion to use a value/op affinity analysis.","shortMessageHtmlLink":"[WIP] Changing stream conversion to use a value/op affinity analysis."}},{"before":"d21a01b7c82e52b2892e675760ece93c9dd0495d","after":"1c6f334bb7166680dd9e6cab9d4ebc60e154a6e9","ref":"refs/heads/shared/igemm","pushedAt":"2024-06-11T16:07:03.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"qedawkins","name":"Quinn Dawkins","path":"/qedawkins","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22101546?s=80&v=4"},"commit":{"message":"Pin llvm version to shared/igemm","shortMessageHtmlLink":"Pin llvm version to shared/igemm"}},{"before":"292cfbea94090c8e992fcafb6bf26e97099f59cc","after":"d21a01b7c82e52b2892e675760ece93c9dd0495d","ref":"refs/heads/shared/igemm","pushedAt":"2024-06-11T14:54:30.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"qedawkins","name":"Quinn Dawkins","path":"/qedawkins","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22101546?s=80&v=4"},"commit":{"message":"[Codegen][GPU] Add producer fusion pattern to loop fusion and hoisting pass","shortMessageHtmlLink":"[Codegen][GPU] Add producer fusion pattern to loop fusion and hoistin…"}},{"before":"cda3ccb052be5aa81e3a3e7f5683b9ba39a8e55b","after":"db7974c26549e3700923244ac1847b032013f898","ref":"refs/heads/main","pushedAt":"2024-06-11T14:21:13.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"zero9178","name":"Markus Böck","path":"/zero9178","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6625526?s=80&v=4"},"commit":{"message":"[util] Add serialization support for `f64` resources (#17640)\n\nSerializing `f64` resources was strangely omitted from the logic while\r\n`f32` and `f16` support is present. Running things with `f64` is\r\ncertainly not a good idea in general but relatively well-supported in\r\nthe LLVM backend. Our use-case is the bring-up of a custom compiler\r\nbackend where `f64` happens to be the most trivial element type to\r\nsupport.\r\n\r\nSigned-off-by: Markus Böck ","shortMessageHtmlLink":"[util] Add serialization support for f64 resources (#17640)"}},{"before":"05c04d472c88adff701125e1cf7f1d38e54178ae","after":"429465963892dbf16965d2345fda560efc5b9265","ref":"refs/heads/shared/transpose-fused-attention","pushedAt":"2024-06-11T14:01:49.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"Groverkss","name":"Kunwar Grover","path":"/Groverkss","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/51270680?s=80&v=4"},"commit":{"message":"add llvm-project changes","shortMessageHtmlLink":"add llvm-project changes"}},{"before":"3569161b72a1606c2645194e72e478032e734a68","after":"fea407088086667282bb22fbdcbe814343434051","ref":"refs/heads/gh-pages","pushedAt":"2024-06-11T13:31:12.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"iree-github-actions-bot","name":null,"path":"/iree-github-actions-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/62120345?s=80&v=4"},"commit":{"message":"Deployed cda3ccb with MkDocs version: 1.6.0","shortMessageHtmlLink":"Deployed cda3ccb with MkDocs version: 1.6.0"}},{"before":"f062b19dfffeaf4e567b6f8793e0f40ff08cf525","after":"cda3ccb052be5aa81e3a3e7f5683b9ba39a8e55b","ref":"refs/heads/latest-snapshot","pushedAt":"2024-06-11T13:26:15.000Z","pushType":"push","commitsCount":9,"pusher":{"login":"iree-github-actions-bot","name":null,"path":"/iree-github-actions-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/62120345?s=80&v=4"},"commit":{"message":"[GPU] Enable tensor.pack e2e tests for rocm backend. (#17587)\n\nThe SplitFullPartialTransferPass and WorkgroupSpecializationPass are no longer needed because we have\r\nmuch more mature vector lowering. It was added long time ago.\r\n\r\nProgress on https://github.com/iree-org/iree/issues/17186\r\n\r\n---------\r\n\r\nSigned-off-by: hanhanW ","shortMessageHtmlLink":"[GPU] Enable tensor.pack e2e tests for rocm backend. (#17587)"}},{"before":"43c13b5bac3d09fff52e81fdceb7d22a421597ba","after":"05c04d472c88adff701125e1cf7f1d38e54178ae","ref":"refs/heads/shared/transpose-fused-attention","pushedAt":"2024-06-11T11:22:08.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"Groverkss","name":"Kunwar Grover","path":"/Groverkss","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/51270680?s=80&v=4"},"commit":{"message":"[Flow] Fuse attention with transpose as seen in SDXL. (#17639)\n\nSigned-off-by: MaheshRavishankar ","shortMessageHtmlLink":"[Flow] Fuse attention with transpose as seen in SDXL. (#17639)"}},{"before":"d7744b74023c6096d6d11c97d9d0f0277edda28a","after":"cda3ccb052be5aa81e3a3e7f5683b9ba39a8e55b","ref":"refs/heads/main","pushedAt":"2024-06-11T01:15:50.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"hanhanW","name":"Han-Chung Wang","path":"/hanhanW","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6949191?s=80&v=4"},"commit":{"message":"[GPU] Enable tensor.pack e2e tests for rocm backend. (#17587)\n\nThe SplitFullPartialTransferPass and WorkgroupSpecializationPass are no longer needed because we have\r\nmuch more mature vector lowering. It was added long time ago.\r\n\r\nProgress on https://github.com/iree-org/iree/issues/17186\r\n\r\n---------\r\n\r\nSigned-off-by: hanhanW ","shortMessageHtmlLink":"[GPU] Enable tensor.pack e2e tests for rocm backend. (#17587)"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEY9ncHwA","startCursor":null,"endCursor":null}},"title":"Activity · iree-org/iree"}