Reverse Engineering Notes (codename grisen)

Preface

A friend asked me to take a look at a certain new game that launched in April…
TL;DR: it is exactly the same as this earlier write-up of mine, so there is not much to see.

Main Text

A Bit of Background

The tutorial is not long. While one of the tutorial CGs was playing, I hit F12 and glanced at the network requests:

https://game.**********.jp/AssetBundles/DmmR18Web/movie/harem/chara0000/be812c6b9482ba61e437eb836d479501a15b1f1c91d0acdda7c80009494d4a89.assetbundle?param=3f0ab091d7acb84ae47f834a6da5ab6d

Looks very familiar.
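
To spell out why it rang a bell: the interesting parts are the hashed file name and the extra query parameter. A minimal sketch of the decomposition, using only the URL captured above; calling the name SHA-256-sized and the param MD5-sized is an inference from their lengths, not something confirmed at this point:

from urllib.parse import urlsplit, parse_qs
import posixpath

# The request captured in the dev tools (domain masked as in the original post).
url = ('https://game.**********.jp/AssetBundles/DmmR18Web/movie/harem/chara0000/'
       'be812c6b9482ba61e437eb836d479501a15b1f1c91d0acdda7c80009494d4a89.assetbundle'
       '?param=3f0ab091d7acb84ae47f834a6da5ab6d')

parts = urlsplit(url)
dirname, basename = posixpath.split(parts.path)
stem, ext = posixpath.splitext(basename)
param = parse_qs(parts.query)['param'][0]

print(dirname)         # /AssetBundles/DmmR18Web/movie/harem/chara0000
print(len(stem), ext)  # 64 hex chars, the length of a SHA-256 digest, plus '.assetbundle'
print(len(param))      # 32 hex chars, the length of an MD5 digest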

Digging In

It felt like exactly the same thing I had seen in a previous investigation.
I downloaded a copy of the APK and glanced at dump.cs: c o m p l e t e l y identical.

// Namespace: App
public static class AssetBundleDownloadUtility // TypeDefIndex: 8700
{
// Fields
public static readonly string HASH_SALT; // 0x0
public static readonly int HASH_COUNT; // 0x8

// Methods
// RVA: 0x145B014 Offset: 0x145B014 VA: 0x145B014
public static void customizeWebRequest(UnityWebRequest webRequest) { }
// RVA: 0x145B148 Offset: 0x145B148 VA: 0x145B148
public static string makeAssetBundleFileNameHash(string assetBundlePath) { }
// RVA: 0x145B2C0 Offset: 0x145B2C0 VA: 0x145B2C0
private static void .cctor() { }
}
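
Judging purely from the dump, makeAssetBundleFileNameHash presumably hashes the bundle's file-name stem with HASH_SALT and an iterated SHA-256, while keeping the directory and extension intact. Below is a sketch of that presumed shape only; the actual salt, iteration count and the per-round XOR with 0x2C are pinned down by the test script in the next section, so the parameters here are placeholders:

import hashlib
import posixpath

def make_asset_bundle_file_name_hash(asset_bundle_path, salt, loop_hash_count):
    # Presumed behavior of App.AssetBundleDownloadUtility.makeAssetBundleFileNameHash:
    # only the file-name stem is hashed; directory and extension stay as-is.
    dirname, basename = posixpath.split(asset_bundle_path)
    stem, ext = posixpath.splitext(basename)
    digest = f'{stem}{salt}'.encode('utf-8')
    for i in range(loop_hash_count):
        if i >= 1:
            # Every round after the first XORs the previous digest with 0x2C.
            digest = bytes(b ^ 0x2C for b in digest)
        digest = hashlib.sha256(digest).digest()
    return posixpath.join(dirname, digest.hex() + ext)

# Hypothetical call; the real salt and count are the HASH_SALT / HASH_COUNT
# values recovered in the script below.
# make_asset_bundle_file_name_hash('movie/harem/chara0000/foo.assetbundle', '<salt>', 11)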

Static Analysis

There is not much need to dump the global-metadata all over again; let's just read things directly.
Test code below (if you can guess which game this is, swap in the right CDN base URL and use it as-is; I was too lazy to upload a gist):

import os
import urllib.request
import hashlib
import UnityPy  # run 'pip install unitypy==1.6.7.2' first

gurizaia_cdn_header = 'https://game.***********.jp/AssetBundles/DmmR18Web/'

HASH_SALT = 'riznqfd7sj5rtw8gvfbur4fysdferto834hvfnds8'
HASH_COUNT = 11
XOR_VAL = b'\x2C'

def makeSha256Hash(text, salt=HASH_SALT, loopHashCount=HASH_COUNT):
    # Salted, iterated SHA-256: every round after the first XORs the previous
    # digest with 0x2C before hashing again.
    text_salt = f'{text}{salt}'
    b_text_salt = text_salt.encode('utf-8')
    for i in range(loopHashCount):
        if i >= 1:
            b_text_salt = baxor(b_text_salt, XOR_VAL * len(b_text_salt))
        b_text_salt = hashlib.sha256(b_text_salt).digest()
    hashRes = ''.join(format(x, '02x') for x in b_text_salt)
    return hashRes

def baxor(ba1, ba2):
    # Byte-wise XOR of two equal-length byte strings.
    return bytes(a ^ b for a, b in zip(ba1, ba2))

def dump_filelist(manifest, output):
    # Parse the AssetBundleManifest and write "original path,hashed CDN URL" rows.
    env = UnityPy.load(manifest)
    for o in env.objects:
        data = o.read()
        if data.name == 'AssetBundleManifest':
            parsed_list = []
            for key in data.type_tree['AssetBundleNames']:
                parsed_list.append(data.type_tree['AssetBundleNames'][key])
            parsed_list.sort()
            with open(output, 'w', encoding='utf-8-sig') as f:
                for p in parsed_list:
                    dirname, basename = os.path.split(p)
                    filename, fileext = os.path.splitext(basename)
                    hashname = makeSha256Hash(filename)
                    f.write(f'{p},{gurizaia_cdn_header}{dirname}/{hashname}{fileext}\n')

def main():
    response = urllib.request.urlopen(f'{gurizaia_cdn_header}DmmR18Web').read()
    dump_filelist(response, 'manifest.csv')

if __name__ == '__main__':
    main()
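
For completeness, a rough sketch of how the resulting manifest.csv might be used. It is untested against the live CDN and assumes a plain GET is enough; customizeWebRequest in the dump and the ?param= query seen in the dev tools suggest the real client may attach something extra. The function name and the 'movie/' prefix filter are purely illustrative:

import csv
import urllib.request
import UnityPy

def fetch_and_list(csv_path='manifest.csv', wanted_prefix='movie/'):
    # Read the "original path,hashed CDN URL" rows written by dump_filelist
    # and print the object types inside the first matching bundle.
    with open(csv_path, encoding='utf-8-sig') as f:
        for original_path, url in csv.reader(f):
            if not original_path.startswith(wanted_prefix):
                continue
            data = urllib.request.urlopen(url).read()
            env = UnityPy.load(data)
            for obj in env.objects:
                print(original_path, obj.type.name)
            break

if __name__ == '__main__':
    fetch_and_list()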

Closing Remarks

It is the same company after all, so hand-me-down legacy code is understandable.
Incidentally, this one has also shut down its servers already.